Henry Markram, Ray Kurzweil and the Artificial Brain

A Popular Fairy Tale

Comparing human brains to computers has become something of an industry lately. Like nuclear fusion, an emergent silicon mind seems to be always just around the next corner. The poster boy for this idea is Ray Kurzweil. If you want to watch a movie imagining precisely this vision, take a look at Transcendence. It's a tolerably good movie, but (plot spoiler) the brain capture fails for truly stupid reasons: it apparently fails to capture the "soul", not because of any fundamental difficulty involved in creating a mind in a machine.

The Blue Brain project, a monstrously expensive and spectacular failure, was conceived by Henry Markram. You can view his version of this fairy tale on TED. [1]

So How Big a Computer Do We Need?

The 19 million volumes in the US Library of Congress represent about 10 trillion characters - 10,000,000,000,000 of them. To make the analogy even approximately apt, we need to imagine each character in each book being a tiny, super-powerful computer with an operating system of millions of lines of code. As long as each of these tiny computers is 1,000 times faster than any computer we could ever build, and as long as they can communicate with each other at better than internet speed, we are getting within sight of the computational power of the human brain.
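To get a feel for the scale this analogy is playing with, here is a back-of-envelope sketch in Python. The constants are just the rough figures used in this post (characters in the Library of Congress, neurons, synapses), not measurements.

    # Back-of-envelope scale comparison using the rough figures quoted in this post.
    LOC_CHARACTERS = 10e12   # ~10 trillion characters in the Library of Congress
    NEURONS = 100e9          # ~100 billion neurons per brain
    SYNAPSES = 100e12        # ~100 trillion synapses per brain

    print(f"Characters in the Library of Congress: {LOC_CHARACTERS:.0e}")
    print(f"Neurons per brain:  {NEURONS:.0e}")
    print(f"Synapses per brain: {SYNAPSES:.0e}")

    # Even if every character were a tiny computer, there would still be
    # roughly ten synapses per character:
    print(f"Synapses per character: {SYNAPSES / LOC_CHARACTERS:.0f}")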

The Super Computer Between Your Ears


Let's take a look at how your brain "computes".

The neuron's closest analogue in a computer is the CPU chip (central processing unit), not the memory. Your computer may have terabytes of memory on board, but that's almost irrelevant. Bits in memory are "dead". They only become useful when run through the CPU for processing. So we are talking about 100 billion CPU's -- one per neuron. That's more than the number of CPU's on the planet at the moment. Per brain.

But this isn't quite right either. The neuron has complex behavior based on its genetic "programming", inputs, outputs and a bath of mostly unknown enzymes. Like every cell in the body, the neuron is as complex as a jumbo jet. You could probably model it with reasonable success with a complex program of some sort. So we basically have 100 billion PC's. That's full-scale computers consisting of one or more CPU's, a big chunk of memory (terabytes), and a few million lines of code. Each.
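To give a sense of what even a crude neuron "program" looks like, here is a minimal leaky integrate-and-fire neuron, one of the simplest textbook models. Real neurons involve ion channels, gene expression and enzyme chemistry that this cartoon ignores entirely, and the constants are arbitrary.

    # A minimal leaky integrate-and-fire neuron: a cartoon of the "program"
    # a single neuron runs. All constants are arbitrary illustrative values.

    class LeakyIntegrateFireNeuron:
        def __init__(self, threshold=1.0, leak=0.95, reset=0.0):
            self.potential = reset      # membrane potential (arbitrary units)
            self.threshold = threshold  # firing threshold
            self.leak = leak            # fraction of potential retained each step
            self.reset = reset          # potential after a spike

        def step(self, input_current):
            """Advance one time step; return True if the neuron fires."""
            self.potential = self.potential * self.leak + input_current
            if self.potential >= self.threshold:
                self.potential = self.reset
                return True
            return False

    neuron = LeakyIntegrateFireNeuron()
    spikes = [neuron.step(0.2) for _ in range(50)]
    print(f"Spikes in 50 steps with constant input: {sum(spikes)}")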

Synapse Simulation

But wait!

Active, dynamic "thinking" in the brain is not controlled by the neuron alone. It's about how one neuron influences another. Each connection is mediated by a synapse. There are about 100 trillion of these in your head. Each synapse connects an "upstream" neuron U with a "downstream" neuron D. Whether or not the connection U,D exists is a dynamic property of the brain. Connections are being formed and broken all the time. What's more, the strength of the connection varies due to processes like "thinking" and "experience". For example, the speed of the connection depends on the existence of the myelin sheath around the axon - something that's built up or torn down depending on the dynamic "usefulness" of the connection. These connections are not "dumb" wires - each connection needs to be simulated, probably with a rather simple program, but there are trillions of them.
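As a toy illustration of the kind of "rather simple program" each connection might need, here is a sketch of a U to D connection whose strength grows with use, decays otherwise, and gets pruned when it becomes useless. The update rule and every constant are invented for illustration; they are not taken from neuroscience.

    # Toy model of a single U -> D connection whose strength changes with use
    # and which is pruned when it becomes useless. The rule and constants are
    # illustrative inventions, not neuroscience.

    class Connection:
        def __init__(self, strength=0.5):
            self.strength = strength
            self.alive = True

        def update(self, u_fired, d_fired, rate=0.05, decay=0.01, prune_at=0.05):
            if not self.alive:
                return
            if u_fired and d_fired:
                self.strength += rate   # strengthen when the connection "pays off"
            else:
                self.strength -= decay  # otherwise slowly weaken
            if self.strength < prune_at:
                self.alive = False      # the connection is torn down

    conn = Connection()
    for u_fired, d_fired in [(True, True), (True, False), (False, False)] * 20:
        conn.update(u_fired, d_fired)
    print(f"strength={conn.strength:.2f}, alive={conn.alive}")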

To make matters more interesting, the synapse is not just a dumb connector. Whether or not it will transmit a signal from U to D depends on a lot of things, including the concentration and gradient of dozens of neurotransmitters in the synaptic cleft, the number of receptors for each type of neurotransmitter molecule (ready to pick up a signal from U to D) and the properties of the transmitting "upstream" part of the synapse (the axon terminal). All these things are dynamic, changing thousands of times per second. Each factor depends on the others in complex ways.
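To make that concrete, here is an equally hedged sketch of a transmission decision driven by neurotransmitter concentrations and receptor counts. The rule and the numbers are made up purely to show how much state even a crude model has to track; a real synapse involves dozens of transmitters and far more interacting factors.

    # Toy transmission decision for a single synapse: whether a signal gets
    # from U to D depends on neurotransmitter levels and receptor counts.
    # The formula and numbers are invented to illustrate the state involved.

    synapse_state = {
        "glutamate": 0.8,   # relative concentration in the cleft (invented units)
        "gaba": 0.3,
        "receptors": {"glutamate": 120, "gaba": 40},
    }

    def transmits(state, threshold=50.0):
        """Crude 'does the signal get through?' rule: excitatory drive minus
        inhibitory drive, each weighted by receptor count."""
        excite = state["glutamate"] * state["receptors"]["glutamate"]
        inhibit = state["gaba"] * state["receptors"]["gaba"]
        return (excite - inhibit) > threshold

    print(transmits(synapse_state))   # True with these invented numbers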

Simulating all this for a single synapse may be feasible, but to do it for every synapse you'd need 100 trillion powerful PC's. It's hard to imagine how you would connect all these computers together, but fortunately for our thought experiment the connections are not that fast by electronic standards. The number of connections is mind-boggling, but we don't need to worry about their speed.

Simulating Synaptic Receptors

Optimistic authors tend to assume that such PC's would have no trouble simulating a synapse in real time (comparing the switching speed of computers to the signalling speed of neurons), but the comparison needs to look at the speed of the chemical processes at the synapse, thousands of which take place simultaneously on nanosecond timescales. The problem is that chemical reactions take place on the "pico" scale, thousands of times smaller and thousands of times faster than silicon logic. We would need 100 trillion PC's, each a thousand times faster than any computer can be. And then there is the elephant in the room: it is by no means obvious that protein reactions can be simulated at all. It's a work in progress (to put it mildly). Simulating just one such reaction is easily a day's work for a supercomputer.
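Here is the same point as arithmetic. Every constant below is a guess chosen only to show the shape of the problem; the per-synapse figure in particular is an assumption, not a measurement.

    # Back-of-envelope: compute needed for a "real time" synapse-level simulation.
    # Every constant is a guess, chosen only to illustrate the orders of magnitude.
    SYNAPSES = 100e12              # ~100 trillion synapses (figure used in this post)
    OPS_PER_SYNAPSE_PER_SEC = 1e9  # guessed operations/second to track one synapse's chemistry
    EXAFLOP = 1e18                 # rough throughput of a top present-day supercomputer

    total = SYNAPSES * OPS_PER_SYNAPSE_PER_SEC
    print(f"Required: {total:.1e} operations per second")
    print(f"That is about {total / EXAFLOP:,.0f} exaflop machines running flat out.")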

Programming and the Problem of Dynamic Non-Linear Systems

And then we need to program all this! Programming turns out to be not just hard but impossible. It's easy to imagine that the same program might work for all 100 trillion synapses, but it could take decades of research to figure out how to approximately model just one synapse. At best, such research would give us a system of a few hundred dynamic non-linear equations. "Solving" such a system to predict its behavior over any useful time span is, in general, impossible. Things like that start to get hard with just three simple equations. Systems of equations that model change through time are called "dynamic". If the variables involved depend on each other in a non-trivial way (through rates of change, for example), the system is "non-linear". Such systems are almost always unsolvable, in the sense that there is no formula giving the values of the variables over time, and tiny errors in the starting values blow up quickly.
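The textbook example of "just three simple equations" is the Lorenz system. The sketch below uses a crude Euler integration (plain Python, no libraries) to follow two trajectories that start one part in a billion apart; after a while they bear no resemblance to each other, which is exactly why long-range prediction of such systems fails.

    # The Lorenz system: three simple coupled non-linear equations.
    # Two trajectories starting a hair apart end up completely different.

    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.0 + 1e-9)   # differs by one part in a billion

    for _ in range(5000):        # 50 time units at dt = 0.01
        a = lorenz_step(*a)
        b = lorenz_step(*b)

    distance = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    print(f"Separation after 5000 steps: {distance:.2f}")  # on the order of the attractor itself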

And Then There's the Issue of Measurement

Measuring the current state of any particular synapse in a human head is also impossible for Quantum Mechanical reasons, so you have the additional problem of deciding the initial conditions for a few hundred parameters in each of 100 trillion synapses. Finding those initial conditions would itself involve solving a set of dynamic non-linear equations, which is impossible.

But Let's Not Give Up Entirely

There is no way to do this, but perhaps we can imagine a start ...

Memory elements (bits) in computers are "dead" and need to be picked up by the CPU to be processed into new memory elements (bits). Like cars that spend 99% of their time parked, almost all the "bits" in the computer sit around waiting to be funnelled through the CPU.

On the other hand, in modern computers, random access memory is refreshed thousands of times per second (it's read out and written back automatically). We could imagine a system that made the write-back a function of more than just the bit being written back. That would be a massively parallel architecture, orders of magnitude more powerful than today's fastest super computers, but still not beyond the realm of imagination. It would make the memory a "thinking" machine, constantly "unpacking" ideas thousands of times per second. For this to work, you'd need a way to represent "ideas" in a form that could be quickly and efficiently "unpacked" into new "ideas". I can imagine a structure involving hundreds of thousands of "concepts" (English words, for example) where the memory function is to write back concept B if concept A is active and B is "strongly" related to A. That would make B active, and concepts related to B would be activated on the next cycle. In this picture, "A is related to B" is our synapse.
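Here is a minimal sketch of that write-back idea as spreading activation over a tiny, made-up relatedness graph. The words, weights and threshold are placeholders for whatever a real representation of "ideas" would turn out to be.

    # Minimal spreading-activation sketch of the "write back related concepts"
    # idea. The concepts, link weights and threshold are invented placeholders.

    related = {                  # "A is related to B" plays the role of a synapse
        "dog": {"bark": 0.9, "pet": 0.7, "wolf": 0.4},
        "bark": {"dog": 0.9, "tree": 0.3},
        "pet": {"dog": 0.7, "cat": 0.8},
        "cat": {"pet": 0.8, "purr": 0.9},
        "wolf": {"dog": 0.4},
        "tree": {"bark": 0.3},
        "purr": {"cat": 0.9},
    }

    def cycle(active, threshold=0.5):
        """One memory 'refresh': activate every concept strongly related
        to a currently active concept."""
        new_active = set(active)
        for concept in active:
            for neighbour, weight in related.get(concept, {}).items():
                if weight >= threshold:
                    new_active.add(neighbour)
        return new_active

    active = {"dog"}
    for i in range(3):
        active = cycle(active)
        print(f"cycle {i + 1}: {sorted(active)}")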

The modern Graphics Processing Unit (GPU) is the kind of parallel processor we need, except that we'd need one a few billion times bigger (same size, more capacity), thousands of times faster, and able to program itself on the fly. Then all we need to do is figure out what an "idea" is ...

But Maybe We Can Design a Better Brain

The one thing that keeps the "strong AI" idea alive is that a human-designed brain may turn out to be many thousands of times more efficient than its meat counterpart. We shouldn't need to simulate the actual brain to produce the "mind". If this is so, we need to understand the architecture of the mind itself (independent of how it "runs" on meat). To put it kindly, this process is in its infancy. Most AI research uses brute force to solve practical problems and has no interest in how the human mind actually works. (Hofstadter hardly attends AI conferences any more.)

For now, we are stuck with the meat computer between our ears, which apparently zips along much faster than 100 trillion super computers. If you ask me, the singularity is not as close as Ray Kurzweil imagines.

The Blue Brain Project

The Blue Brain project is an attempt to model a tiny piece of a brain based on actual data on the brain structure of a rat. It is not even an attempt to model the rat's entire brain - just a patch of its tiny neocortex.

Artist's conception: a fractal cortical column
The project shows the sweeping simplifications and assumptions that are required, along with the vast computer facilities needed to model even the simplified version. The project investigates an intermediate structure, larger than the neuron and smaller than brain modules such as the visual cortex: the "cortical column". There is no general definition of what, exactly, this "column" is, but the outer surface of the brain (the neocortex) seems to have a consistent structure of associated neurons in "vertical" columns (stacked inwards from the surface). The "wiring" within a column and between columns is not random (it is widely considered to be a fractal structure, each column being the daughters of a single stem cell). It has long been recognized that the cortical column is promising both as a subject of study (how does it work, what does it do?) and as a subject of computer modelling, if you happen to own a super computer.

The Blue Brain Project seems to be the ultimate evidence that Hofstadter's line of investigation (at the "meme" level) is sadly far from the mainstream, even though his model of the mind is, so far, the most successful. "Blue Brain" is working on the reductionist assumption that understanding of the mind will "emerge" from a sufficiently detailed understanding of the "fundamental" aspects of the brain, just as Quantum Mechanics is supposed to be a "Theory of Everything". The hopelessness of this idea is illustrated by the goals of the project, which attempt to model a tiny patch of cortical columns based on the brain structure of a rat.

Literature on the cortical column seems to focus on brain processes at the very origin of perception (such as the visual cortex). The assumption that the same processes are involved in, for example, the formation of new concepts is so far not justified by the research.

[1] Recent reports indicate that the Blue Brain project is not going well. Markram pitches the project here as a TED talk. He is stunningly naive about his "top down" idea of how the brain works. He parades the reductionist assumption that it can all be "unpacked" by simulating it on a computer. His talk illustrates a popular technique of "brain talk": Markram talks as if "columns" are well defined, let alone understood, just as others talk about "neurons" as some kind of "explanation" of thought. He mentions "10 million synapses", somehow ignorant of the fact that there are 100 trillion. He lies about "having the math" to describe neurons with a "handful" of equations. He lies by creating the impression that having such equations amounts to solving them. He lies about attempting a "real time" simulation. His bottom line: "It's not impossible to build a human brain and we will do it in 10 years".
