Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Monday, November 21, 2005

Super-computer trounced by human brain

IBM's Blue Gene super-computer has performed a record 280.6 trillion operations per second on the industry standard LINPACK benchmark. That is more than 10^14 operations per second.

The human brain contains around 10^11 brain cells, so Blue Gene could in principle devote about 10^3 operations per second to each brain cell in a simulation of the whole brain.

However, each brain cell has a lot of biological computing machinery associated with it (e.g. its large number of synapses, the dendritic tree that connects the synapses to the cell body, the cell membrane, etc.), and the time scale for brain cell dynamics is around a millisecond. The 10^3 operations per brain cell per second that Blue Gene gives you is therefore nowhere near enough compute power for a real-time simulation of the brain.

If each brain cell (including all of its biological computing machinery) needs around 10^9 operations per second (10^3 updates per second, i.e. one per millisecond, times a conservative 10^6 operations per update to account for the large number of synapses and the dendritic tree), then the total number of operations per second needed for the whole brain is around 10^20 (10^9 operations per second per brain cell, times 10^11 brain cells).

Thus, conservatively 10^20 operations per second are needed for a real-time simulation of the human brain, but Blue Gene can supply only 10^14 operations per second. There is rather a large shortfall.
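As a rough sanity check, the back-of-envelope arithmetic above can be written out as a short Python sketch. All of the figures (10^11 brain cells, one update per millisecond, a conservative 10^6 operations per cell per update, and Blue Gene's roughly 10^14 operations per second) are just the assumptions made in this post, not measured values.

```python
# Back-of-envelope estimate: compute needed for a real-time brain simulation
# versus what Blue Gene supplies, using the rough figures assumed in this post.

neurons = 1e11                  # assumed number of brain cells
updates_per_second = 1e3        # one update per millisecond of brain-cell dynamics
ops_per_cell_per_update = 1e6   # conservative cost of synapses, dendritic tree, etc.

brain_ops_per_second = neurons * updates_per_second * ops_per_cell_per_update
blue_gene_ops_per_second = 1e14  # Blue Gene's 280.6 teraflops, rounded to 10^14

print(f"Needed:    {brain_ops_per_second:.0e} ops/s")      # 1e+20
print(f"Supplied:  {blue_gene_ops_per_second:.0e} ops/s")  # 1e+14
print(f"Shortfall: factor of {brain_ops_per_second / blue_gene_ops_per_second:.0e}")  # 1e+06
```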

Clearly, we are nowhere near being able to simulate all of the neurons in the human brain in real time. Even if Blue Gene were fast enough, it would still lose out in terms of processing power per cubic millimetre. Blue Gene occupies a very large room (think "supermarket", or have a look at the photo here), whereas the human brain occupies a cranium.

And don't even think about comparing the relative energy requirements of Blue Gene and the human brain.

Blue Gene achieves its speed by connecting together a large number of conventional microcomputers, each of which uses a variant of a more-or-less standard computer architecture (the so-called von Neumann architecture). On the other hand, the human brain does not use anything remotely like this approach to the problem of doing computations. In effect, it uses special-purpose biological hardware for each and every processing node (i.e. each brain cell plus its associated machinery), and this biological hardware operates as a fine-grained parallel computer.

To compete with a human brain, a remote descendant of Blue Gene will have to operate in a similarly fine-grained parallel way, and will thus have to be built using a much less clunky technology than tens of thousands of interconnected microcomputers.

I believe that the solution to this problem will emerge from nanotechnology, and it will use something analogous to artificial DNA to orchestrate the building of fine-grained parallel computers.

6 Comments:

At 27 November 2005 at 00:44, Anonymous Anonymous said...

"Thus, conservatively 10^20 operations per second are needed for a real-time simulation of the human brain, but Blue Gene can supply only 10^14 operations per second. There is rather a large shortfall."

Blue Gene is "only" out by a factor of 10^6. Assuming Moore's Law (a doubling roughly every 18 months) continues to hold, in 18 years we will have computers that are 2^12 = 4096 times faster.

Assume further that we are not aiming for real-time simulation, but instead intend a one-year project to simulate a full day of activity of a human brain. 365 days of simulating at 4096 times today's speed would give us the 1.5 x 10^6 factor increase needed.
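For concreteness, here is the same projection as a short Python sketch; the 18-month doubling time and the 365:1 slowdown (one year of computing per simulated day) are just the assumptions in this comment.

```python
# Sketch of the projection: 18 years of Moore's Law plus a 365:1 slowdown
# (one year of computing per simulated day of brain activity).

doubling_time_years = 1.5   # assumed Moore's Law doubling time
years = 18
speedup_from_moore = 2 ** (years / doubling_time_years)   # 2^12 = 4096

slowdown_factor = 365       # simulate one day of brain activity over one year
effective_gain = speedup_from_moore * slowdown_factor

print(f"Hardware speedup after {years} years: {speedup_from_moore:.0f}x")  # 4096x
print(f"Effective gain with the 365:1 slowdown: {effective_gain:.2e}x")    # ~1.50e+06x
```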

From that perspective, the hardware requirements are quite achievable, but writing software that could simulate the human brain as efficiently as the real thing is the real difficulty. Still, we have 18 years to see what we can do.

 
At 27 November 2005 at 02:33, Blogger Stephen Luttrell said...

I also want to see real-time simulation of human brain-like processing. Not only that, but I want to see it done on a desktop computer. It will take a while before that happens! I agree that the software is a real challenge. We need to get away from the idea of writing the software ourselves, and to move towards writing software that writes software for us, in effect.

 
At 27 March 2008 at 16:24, Anonymous Anonymous said...

But why bother - nature has already done it.

 
At 27 March 2008 at 18:06, Blogger Stephen Luttrell said...

Presumably, you are saying that nature has already evolved brains, so what is the point of us repeating this "exercise"? The point is that we will eventually be able to do better than nature, so it is worth us putting in the effort to achieve this goal, though we are quite a way from it at the moment, on both the hardware and the software fronts.

I believe that we will have to merge our concepts of hardware and software into a hybrid approach which is largely able to program itself (i.e. self-organise), rather like the way nature has evolved how things work in brains.

I define human "technology" as part of "nature" (I can't think of another consistent definition), so the creation of artificial brains is a consequence of natural (but very sophisticated) processes.

 
At 28 June 2009 at 11:55, Anonymous Ferdinand Balfoort said...

The question is not whether we can replicate the workings of the brain, because mechanically I think it is possible, and, considering Moore's Law and nanotechnology, it may not be far off in terms of total human evolution.

Before we get too excited about the potential, I think we should first endeavour to understand and maximize the full potential of our own brains, especially its ability to evolve to respond to environmental and evolutionary requirements.

In my opinion, for example, we are underutilizing and under-stimulating the development of the neocortex, at the risk of being beholden to the limbic system, the older and more reptilian brain.

In my mind there is a great risk if it were possible to replicate the brain while there is a major and material deficiency in neocortex development, something which Einstein grappled with extensively during the development of those rather crude human tools before and during the Second World War, culminating in the dropping of the atomic bombs on Japan.

I am deliberately using the term "crude" to show that human beings have been unable to control even such basic developments and avoid major cataclysms. Imagine marrying a carbon copy of our brain with the predominantly reptilian brain, untamed conclusively as yet by the neocortex and its emphasis on ethics, morals and values, and I think you will agree it may be better to focus on the basics first before creating another genie that is difficult to put back in the lamp.

Ferdinand C Balfoort (BCA,CA,CIA)

 
At 28 June 2009 at 12:33, Blogger Stephen Luttrell said...

I think the Singularity Institute for Artificial Intelligence addresses your worries. One thing that they are concerned about is to ensure that the "seed AI" that starts off down the route towards super-human AI has "values" that ensure that it is friendly towards us biological humans.

As you have observed, it is very easy to imagine a "seed AI" that goes down the wrong route, and which then quickly becomes hostile to us. Unless we prepare the ground well, what can go wrong will go wrong.

 
