
General AI - 2018

This page contains notes about neuroscience, AI research and neural networks. Do not expect any explanation of the facts presented here. It is not learning material; it is more of an inspirational text (mostly just my notes). Whenever I learn something interesting, I will write it here. Work in progress...

Last update: 30 March 2018

Approach options

AI as a program written by programmers has little to no chance of being intelligent. This approach was a huge hit at the end of the 20th century. Most notable result: Watson (practically a huge database). Now this approach is mostly forgotten.

Perceptrons/neural networks are quite successful at simple tasks: voice recognition, image recognition, market prediction. But learning is relatively slow (backpropagation algorithms) and requires large training datasets and a huge amount of computing power. They are still far from any form of general intelligence as we know it.
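To make the perceptron idea concrete, here is a minimal sketch of my own (illustrative only, not from any library): a single perceptron trained with the classic perceptron learning rule to learn logical AND. Note that a single layer needs no backpropagation; that only becomes necessary (and slow) with multiple layers.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0/1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights in proportion to the error; converges
            # for any linearly separable problem such as AND.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After a handful of epochs the weights separate the four points, so `predict(1, 1)` is 1 and the other three inputs give 0.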

Spiking neural networks try to simulate brain activity more precisely than perceptrons. They seem to be at least as powerful as perceptron networks, but there is no efficient learning algorithm for them.
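A sketch of what "spiking" means, using the simplest common model, the leaky integrate-and-fire (LIF) neuron. All parameter values here are illustrative toy numbers, not biological fits.

```python
def lif_spikes(input_current, steps=100, dt=1.0,
               tau=10.0, v_rest=0.0, v_thresh=1.0):
    """Simulate a LIF neuron driven by a constant input current.
    Returns the time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_rest         # reset after the spike
    return spikes
```

With a weak input (e.g. `lif_spikes(0.5)`) the potential settles below threshold and the neuron stays silent; a stronger input (e.g. `lif_spikes(1.5)`) produces a regular spike train. Information is carried by spike timing, not by a single continuous activation as in a perceptron.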

HTM by Jeff Hawkins is probably the best model of intelligence we have. It builds on the premise that intelligence is not an algorithm but a memory system capable of making predictions and choosing actions according to those predictions. It is heavily inspired by the human brain, and it can already generalize information, recognize learned sequences, etc. In many cases it outperforms neural networks in terms of data efficiency.
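A toy illustration of that core premise (my own sketch, vastly simpler than real HTM, which uses high-order context and sparse representations): a memory that predicts the next element of learned sequences instead of computing an answer from scratch.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """First-order sequence memory: remembers which element
    tends to follow which, and predicts the most common successor."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, element):
        """Most frequently seen successor, or None if unseen."""
        seen = self.transitions.get(element)
        return seen.most_common(1)[0][0] if seen else None

m = SequenceMemory()
m.learn("ABCD")
m.learn("ABCD")
m.learn("XBCY")
```

After learning, `m.predict("C")` returns `"D"` because D followed C more often than Y did; there is no "algorithm" for the answer, only recalled statistics.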

Various neuroscientists are trying to create a perfect mathematical model of the neuron and map the entire brain network. This bottom-up approach is certainly valid, but I don't think it is useful right now: we do not have a supercomputer capable of running such a simulation.

Computational power

Humans have ~100G neurons in the brain.
~80% of them are in the cerebellum. Humans do not need the cerebellum.
~26G neurons (let's round that down to ~20G) are in the neocortex, which is arguably all we need.
~10G neurons are in one hemisphere. Humans are still intelligent with only one hemisphere.
An average cortical neuron fires between 0.29 and 1.82 times per second. Let's round that to 0.5 Hz.
So we need ~5G neuron-firing calculations per second.
An average neuron has ~1k synapses. If a neuron firing means passing a signal to all its synapses, then
we need ~5T synapse calculations per second. Doable?
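The estimate above can be checked in a few lines (numbers are the rounded figures from these notes, not measured values):

```python
# Back-of-envelope check of the neuron-level estimate.
neurons_hemisphere = 10e9     # ~10G neurons in one neocortical hemisphere
firing_rate_hz = 0.5          # rounded average cortical firing rate
synapses_per_neuron = 1_000   # rounded average synapse count

neuron_firings_per_s = neurons_hemisphere * firing_rate_hz
synapse_ops_per_s = neuron_firings_per_s * synapses_per_neuron

print(f"{neuron_firings_per_s:.0e} neuron firings/s")  # 5e+09
print(f"{synapse_ops_per_s:.0e} synapse ops/s")        # 5e+12
```

5T synapse operations per second is within reach of a modern GPU if one operation is cheap, which is why the per-operation cost assumption matters so much here.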

We could also zoom out a bit. The cortical column is possibly the smallest computational block in our brains. When neurons fire, they usually activate all other neurons in the column, or at least in its supragranular part. Under this premise, one could assume that a cortical column is active about as often as an average neuron (let's double that to 1 Hz, for safety). So for ~100M cortical columns in one hemisphere,
we need ~100M column-activation computations per second.
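The same back-of-envelope style for the column-level estimate, to see how much zooming out buys:

```python
# Column-level estimate (numbers from the notes above).
columns_hemisphere = 100e6   # ~100M cortical columns in one hemisphere
activation_rate_hz = 1.0     # doubled average rate, for safety

column_ops_per_s = columns_hemisphere * activation_rate_hz
# 1e8 column activations/s: about 50,000x cheaper than the
# synapse-level estimate of 5e12 ops/s.
```

Of course, each "column activation" would be a much more complex operation than a synapse update, so the two estimates are not directly comparable.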

Interesting facts

Synapses grow mostly early in life.

The brain uses sparse data representation.
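A sketch of what sparse representation means in practice, in the style of the sparse distributed representations (SDRs) used in HTM theory. The sizes here are toy values of my own choosing, not from any paper: out of many bits, only ~2% are active, and similarity is measured by how many active bits two representations share.

```python
import random

N_BITS = 2048    # total bits in the representation
N_ACTIVE = 40    # ~2% active at a time -> sparse

def random_sdr(rng):
    """A random SDR, stored as a set of active bit indices."""
    return set(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    """Shared active bits: a noise-tolerant similarity measure."""
    return len(a & b)

rng = random.Random(0)
a = random_sdr(rng)
b = random_sdr(rng)
# Two unrelated SDRs share almost no active bits, while a noisy
# copy of `a` (drop 5 bits, add 5 random ones) still overlaps
# heavily with the original -- the representation tolerates noise.
noisy = set(list(a)[:-5]) | set(rng.sample(range(N_BITS), 5))
```

This noise tolerance, and the astronomically low chance of two random patterns colliding, are the usual arguments for sparse over dense codes.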

Besides its input/output cells, the cerebellum has only inhibitory neurons.

A study by Arif Hamid and Joshua Berke suggests that dopamine is linked with both learning and motivation.

My unanswered questions

Learning: synaptic growth or synaptic plasticity? Or both?

Does sparse data representation have any attributes absolutely necessary for intelligence? Isn't dense data representation more efficient?

Is dopamine the product of the ultimate brain evaluation function used for reinforcement learning? What about other neurotransmitters, namely serotonin?

Friendly AI: do we want a general AI to have an evaluation function similar to ours (kindness, morality...)? How bad would it be if a general AI had a completely different evaluation function?