Brain Model in a Nutshell

"Growing" Brain Tissue

A key insight is to take into account the fractal nature of how the brain grows (like any other tissue) from a single stem cell. Model structure should reflect the way cells grow and connect rather than relying on mathematical structures like matrices. Connections look random but have an underlying fractal structure. On the other hand, the model needs to reflect the well-known principle that "cells that fire together wire together". The fractal structure "explains" this rule in a physical way (cortical columns), but the brain can wire itself up in a way that might be seen as four-dimensional: connections are formed due to "closeness" in space as well as some kind of time function.

The model should assume some level of continuous growth. In the brain, new neurons are being created and others are dying all the time. It's not known how these new neurons find their place in the structures of the brain.

Similarly, we know that the brain is, to some extent, "plastic". It allows large-scale re-wiring, probably between columns rather than within them.

Pre-frontal cortex and the "idea"

The prefrontal cortex may work just like the specialized areas of the brain, but feed back on itself to create recursive, fractal "layers" on the fly, at approximately "brain wave" frequency. We know that this process can feed back into the specialized cortex to re-use these circuits to present "ghost" images (like visual memory). Perhaps this is what we mean by an "idea", and it accounts for how these ideas seem to flow in the experience we call consciousness. The rate at which these images are formed (brain wave frequency) is interestingly similar to the frame rate of a movie or the perceptible range of audio frequency.

What would be the model of "stimulation"?

Using the example of the retina, a cell would consider itself to be "stimulated" as a function of stimulation in nearby (connected) cells. Connection would follow the fractal pattern, with daughter cells of the same stem cell being considered "close", as well as those physically in contact (taking into account direct cell-to-cell communication present in all tissues). Direct stimulation would take place only at one "layer". Would that be layer zero or layer K? Or should we consider the idea of stimulation to define layer zero?
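A minimal sketch of this idea in Python: "closeness" (sibling-of-same-stem-cell or physical contact) is collapsed into a per-connection weight, and a cell's stimulation is just a weighted sum over its connected cells. The function name, the weight values, and the dictionary representation are illustrative assumptions, not part of the model above.

```python
def stimulation(activity, connections):
    """Stimulation of one cell as a function of its connected cells.

    activity: dict of cell_id -> current activity level
    connections: list of (cell_id, weight) pairs; the weight encodes
    "closeness" (e.g. a larger weight for daughters of the same stem
    cell than for cells merely in physical contact).
    """
    return sum(weight * activity.get(cell_id, 0.0)
               for cell_id, weight in connections)

# Layer-zero cells would instead receive direct outside stimulation:
retina_activity = {"A": 1.0, "B": 0.5}
stimulation(retina_activity, [("A", 0.5), ("B", 0.25)])  # 0.625
```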

Cell logic structure

Signalling between cells must include the concept of suppression: a signal from a to b may reduce the probability that b becomes "excited". In general, the "method" in the cell that determines whether to signal the event "I am excited" should involve the range of simple logical constructs (and, or, nor, not) along with dynamic versions of these functions (sum over time). In short, triggering will result from exceeding a threshold described in a set of discrete non-linear functions. Inputs to these functions would be internal variables reflecting stored "awareness" of events in connected cells. Perhaps we can assume that "time" resets to zero when the cell "fires", or at least assume the cell is in one of two discrete states: firing and not firing.

The set of equations determining whether the cell "fires" or not can be thought of as the model for the cell's "output" axons. The "input" (right-hand side of the equations) represents a model of the cell's dendrites (inputs). The actual set of equations (the way all these variables are related) can be thought of as the model for the neuron's internal "logic".
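The simplest member of that family of threshold functions can be sketched directly; negative weights stand in for suppression. The function name and the particular values are assumptions for illustration only.

```python
def cell_fires(inputs, threshold=1.0):
    """inputs: list of (value, weight) pairs, one per dendrite.

    Negative weights model suppression: a signal that reduces the
    chance of firing. Sum-and-threshold is the simplest of the
    discrete non-linear functions described above; richer logic
    (and, or, nor, sums over time) would replace this one line.
    """
    total = sum(value * weight for value, weight in inputs)
    return total >= threshold  # two discrete states: firing / not firing

cell_fires([(1.0, 0.7), (1.0, 0.5)])   # excitation only: fires
cell_fires([(1.0, 0.7), (1.0, -0.5)])  # suppression: does not fire
```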

Some provision should be made for "noise". In other words, "firing" should not be deterministic, but a result of increasing or decreasing probability.

We may save ourselves the trouble of modelling a "not firing" event by allowing the corresponding "receiver" internal variable to "decay", requiring occasional refresh ("still firing").
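The two points above (noise and decay-with-refresh) can be sketched together. The decay factor and the way potential maps to probability are made-up assumptions, not derived from the model.

```python
import random

DECAY = 0.8  # hypothetical per-tick decay factor

def decay_state(state):
    """Receiver-side variables decay every tick; a periodic
    "still firing" refresh from the sender keeps them alive,
    so no explicit "not firing" event is ever needed."""
    return {sender: value * DECAY for sender, value in state.items()}

def fires(potential, rng=random):
    """Noisy firing: the potential raises or lowers the probability
    of firing rather than deciding it outright."""
    probability = min(1.0, max(0.0, potential))
    return rng.random() < probability
```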

The State of the Cell

With each "tick of the clock", the cell proceeds from one state to the next, applying the state model to produce a new state (list of internal variables). At the end of the "tick", it signals the "firing" event or not, then "decays" the state variables according to some internal rule. It then "goes to sleep" unless "woken up" at the next tick or notified of the need to change a state variable due to the external event of the corresponding neuron having fired. Of course, this can be generalized simply by treating the "tick" of the clock as itself an "event".
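The per-tick cycle described above can be sketched as a small state machine. The class shape, threshold, and decay rate are illustrative assumptions; the event-driven "sleep/wake" machinery is left out.

```python
class Cell:
    """One cell as a per-tick state machine: apply the state model,
    signal "firing" or not, decay the state variables, then sleep
    until the next event (the tick itself being just another event)."""

    def __init__(self, threshold=1.0, decay=0.5):
        self.state = {}  # internal variables, keyed by sender cell
        self.threshold = threshold
        self.decay = decay

    def notify(self, sender):
        """External event: the corresponding neuron fired."""
        self.state[sender] = 1.0

    def tick(self):
        firing = sum(self.state.values()) >= self.threshold
        # decay state variables according to an internal rule
        self.state = {k: v * self.decay for k, v in self.state.items()}
        return firing
```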

This is not a particularly original concept. It's a simple-seeming idea but nasty to "debug". The event-driven design concepts are drawn from the "tool box" used by the computer's BIOS. For performance reasons, programming should at least provide for an ultimate version that binds closely to the actual computer BIOS, using the actual "tick" of the computer's clock. Obviously, we can't imagine triggering millions of cell events at each computer "tick" (which happens in the nanosecond order of magnitude). However, our model should provide for the possibility that the lists of listeners are partitioned between CPUs and that the cells can refresh themselves independently, possibly using or sharing different CPUs. At the lowest level, then, the model should not assume that everything happens on one CPU, nor that the event list is shared between all CPUs. All state data should be stored within the cell or the event list.

Cell "logic" should be kept simple to provide for direct implementation in silicon. This would permit mass production of cells for a given "tissue type".

Signalling and events

"Firing" of a cell is an event picked up by "listeners". Ultimately, some cells will "fire" as a result of direct, outside stimulation. The synapse (connecting neuron A to B) would be modelled by the list of cells Bi listening for firing events from cells Ai. Physical implementation constraints would be proportional to the capacity to store these lists. Dynamic performance would be determined by the refresh rate of the lists (how quickly "events" can be picked up to fire a new set of events). Ideally, the refresh rate should be fast enough so that there are "long" periods where "few" events take place. Cycling between "few" and "lots" of events should mirror brain wave frequency (order of 10 to 100 Hz).
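The listener-list model of the synapse can be sketched in a few lines. The function names and cell labels are assumptions for illustration; a real implementation would partition these lists across CPUs as discussed earlier.

```python
from collections import defaultdict

# The synapse table: cell -> cells listening for its "fired" event.
listeners = defaultdict(list)

def connect(a, b):
    """Model of the synapse from A to B: b joins the list of cells
    listening for firing events from a."""
    listeners[a].append(b)

def propagate(fired):
    """One refresh of the lists: gather every cell notified by this
    round of firing events."""
    notified = []
    for cell in fired:
        notified.extend(listeners[cell])
    return notified

connect("A1", "B1")
connect("A1", "B2")
propagate(["A1"])  # B1 and B2 are notified
```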

Random "wiring"

How do we model the possibility of virtually any neuron connecting with any other if they "fire together"? This would imply dynamically adding A to the list of "listeners" to the "B fired" event. The initial list of "listeners" should be created as a result of fractal growth from a daughter cell. After that, listeners could be added or dropped as a result of a different rule ("experience"). Perhaps we could feed the stream of event states into a (hopefully fast) external machine that would detect correlation using old-fashioned statistical methods, then feed back a stream of "suggested" new connections (listener pairs A, B). This would, in fact, simulate "learning".
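A toy stand-in for that external correlation machine, assuming the event stream arrives as sets of cells that fired in the same tick. The co-firing threshold and labels are arbitrary; real correlation detection would be statistical rather than a simple count.

```python
from collections import Counter
from itertools import combinations

def suggest_connections(event_log, min_cofires=2):
    """Count how often each pair of cells fired in the same tick
    and suggest listener pairs for the frequent co-firers:
    "cells that fire together wire together"."""
    cofires = Counter()
    for tick in event_log:  # tick = set of cells that fired together
        for pair in combinations(sorted(tick), 2):
            cofires[pair] += 1
    return [pair for pair, count in cofires.items() if count >= min_cofires]

log = [{"A", "B"}, {"A", "B", "C"}, {"C"}]
suggest_connections(log)  # suggests wiring A and B together
```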

Qualitative and Quantitative Measures

Even if not implemented in silicon, dynamic properties of the model could be determined and compared to what is known about the brain, such as the way cells signal, the number of cells, and the size and structure of cortical columns. "Order of magnitude" estimates could give us an idea as to whether this type of model can tell us anything about how the brain works. Assuming that the brain is more complex than this model, perhaps we can bracket brain complexity, performance and the types of brain functions that can be implemented in silicon.

How can "memory" be Modelled?

I assume that "memory" is implemented in tissues similar to what is discussed above. In particular, their internal state is persistent. Their output, however, is assumed to feed back to produce "ghost" perception. At first glance, this behaviour could be simulated simply by a longer decay of internal state variables, simulating a persistent neuron-to-neuron connection (axon to dendrite). Again, such connections would be richer among daughter cells. We can't assume that the column structure applies, since brain structures such as the hippocampus (known to be involved in memory) are not cortex tissues. At first blush, the "wiring" may be simulated by a fractal structure applying to one big "blob" of "tissue" called "memory".
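The "longer decay" idea reduces to a single parameter. A quick sketch, with made-up rates, of how memory tissue and sensory tissue could differ only in their per-tick decay constant:

```python
def remaining(value, decay_rate, ticks):
    """Internal state left after `ticks` ticks at a given per-tick
    decay rate. The rates below are invented for illustration."""
    return value * decay_rate ** ticks

sensory = remaining(1.0, 0.5, 10)   # fast decay: the trace is gone
memory = remaining(1.0, 0.99, 10)   # slow decay: the "ghost" persists
```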

Overall Structure

All of the above hints at a structure of similar tissues arising as variations on a theme, with every region differing mainly in the persistence of internal state variables, fractal "wiring" and the degree of looping (recursive) "wiring". Frontal cortex is "loopy" with state decaying rapidly unless refreshed. Sensory cortex is much less "loopy" but fast decaying. Connection is highly structured along fractal lines. Memory is extensively connected to everything with slow internal decay rate for internal state.
