Friday, March 23, 2012

On Intelligence & Consciousness

I'm reading Jeff Hawkins' book 'On Intelligence.' Hawkins was a designer of the Palm Pilot and is clearly a sharp, creative engineer. He took some time to catch up on the scientific literature on the brain, reading it with an engineer's eye. His conclusion is that traditional McCulloch-Pitts neural networks are missing the most powerful features of the neural structure of at least the neocortex (his main focus, and probably the main seat of our information-processing superiority). These features are coupled, but I will explain them singly.

The first is that each layer of neurons should be looking for patterns, and be able to find a denser way of representing some abstractable characteristics. These characteristics would not vary significantly across widely varying inputs. One example he gives is melody: it consists of notes, and one level of abstraction up it consists of intervals, some of which may be repeated. The abstraction is not the full original sound, but it does capture important information about the melody, and it gains some generality in that the same set of intervals may be recognised when listening to another melody.
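The melody example can be made concrete with a few lines of code. This is just my own toy illustration (the note sequences are the opening of 'Twinkle Twinkle' in two keys): the interval representation discards absolute pitch, so a transposed melody maps to the same higher-level pattern.

```python
def intervals(notes):
    """Reduce a note sequence (MIDI note numbers) to successive intervals."""
    return [b - a for a, b in zip(notes, notes[1:])]

# 'Twinkle Twinkle' opening, in C major and transposed up to G major:
c_major = [60, 60, 67, 67, 69, 69, 67]
g_major = [67, 67, 74, 74, 76, 76, 74]

print(intervals(c_major))                      # [0, 7, 0, 2, 0, -2]
print(intervals(c_major) == intervals(g_major))  # True: same abstraction
```

The denser representation loses the absolute pitches but keeps exactly the structure a listener needs to recognise the tune.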

The second feature is that as a set of neurons recognises its particular pattern, higher-level neurons fire and stimulate lower, less abstract neurons to lower their activation thresholds. They are predicting, based on a pattern of in-layer connections, the information the lower layer should expect. This is very helpful in two ways:
- first, the forward work of recognising and handling (abstracting) gains power, because if the note is distorted or faint, the weaker signal will still be able to stimulate the recognition.
- second, the power of disconfirming results is greatly increased, and so the learning rate is improved.

Hawkins calls this 'predictive memory.' He has founded a company, Numenta, which has released demo software for its implementation, 'Hierarchical Temporal Memory.' There is also a demo of some vision applications, in which it performs quite well.

There is a very interesting parallel with Integrated Information Theory by Giulio Tononi, which proposes that consciousness could be computed as an entropy measure over all the current neuron states against their probable states. If there is lots of disconfirming or simply unpredicted information being integrated and learned from, the measure would be quite large. He gives a measure called Phi, and, while acknowledging he is probably only creating a mathematical cartoon, suggests that if something like this were the case it would perhaps be useful ethically. His theory transforms consciousness from something we don't understand into some maths that are probably wrong and a computation that would be impossible to perform. Quite an achievement.
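To give a flavour of the 'entropy against probable states' idea, here is a deliberately crude sketch. This is not Tononi's actual Phi, which involves partitioning the system and is far more involved; it is just the Shannon surprisal of an observed state under the network's predicted distribution, which captures the intuition that unpredicted information scores high and expected information scores low.

```python
import math

def surprisal(predicted_prob):
    """Bits of surprise for observing a state the network assigned this probability."""
    return -math.log2(predicted_prob)

print(surprisal(0.5))   # 1.0 bit: a coin-flip state
print(surprisal(0.01))  # ~6.64 bits: a highly unpredicted state
```

Summing something like this over every neuron in a brain, against a full joint distribution of probable states, is where the 'computation that would be impossible to perform' comes in.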
