Human Brain’s Biological Algorithm Mirrors the Internet – “May Not Be a Coincidence” (VIDEO)

 


In 2013, neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, argued that consciousness arises within any sufficiently complex, information-processing system. In his view, all animals, from humans on down to earthworms, are conscious; even the Internet could be. Why? Because that’s the way the universe works.


“The electric charge of an electron doesn’t arise out of more elemental properties. It simply has a charge,” says Koch. “Likewise, I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems.”

Deciding how to route information fairly and efficiently through a distributed system with no central authority was a priority for the Internet's founders. Now, a discovery from the Salk Institute for Biological Studies shows that an algorithm used for the Internet is also at work in the human brain, an insight that improves our understanding of both engineered and neural networks, and potentially of learning disabilities as well.

Saket Navlakha, a Salk researcher who develops algorithms to understand complex biological networks, wondered whether the brain, with its billions of distributed neurons, was managing information in a similar way. So he and coauthor Jonathan Suen, a postdoctoral scholar at Duke University, set out to mathematically model neural activity.

The computer-generated image of worldwide internet connections (the “Global Brain”) is shown above. The conceptual similarities with the human brain are remarkable. Both networks exhibit a scale-free, fractal distribution, with some weakly connected units and some strongly connected ones arranged in hubs of increasing functional complexity, which helps protect the constituents of the network against stresses. Both networks are “small worlds,” meaning that information can reach any given unit by passing through only a small number of other units. This assists the global propagation of information within the network and gives each and every unit the functional potential to be directly connected to all others. (The Opte Project/Barrett Lyon)

In the engineered system, the solution is to monitor how congested the network is and control the flow of information so that routes are neither clogged nor underutilized. To accomplish this, the Internet employs an algorithm called "additive increase, multiplicative decrease" (AIMD), in which your computer sends a packet of data and then listens for an acknowledgement from the receiver. If the packet is promptly acknowledged, the network is not overloaded and your data can be transmitted at a higher rate. With each successive successful packet, your computer knows it's safe to increase its speed by one unit, which is the additive-increase part.

But if an acknowledgement is delayed or lost, your computer knows there is congestion and slows down by a large amount, such as by half, which is the multiplicative-decrease part. In this way, users gradually find their “sweet spot,” and congestion is avoided because users take their foot off the gas, so to speak, as soon as they notice a slowdown. As computers throughout the network adopt this strategy, the whole system can continuously adjust to changing conditions, maximizing overall efficiency.
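For readers who like to see the rule in action, here is a minimal Python sketch of AIMD as just described. The link capacity, the small chance of a lost acknowledgement, and the step sizes are illustrative assumptions, not values taken from real TCP stacks or from the Salk study.

import random

# Assumed, illustrative parameters (not from the study or from real TCP stacks)
CAPACITY = 50          # packets per round the "network" can carry
ADDITIVE_STEP = 1      # additive increase: speed up by one unit per good round
DECREASE_FACTOR = 0.5  # multiplicative decrease: cut the rate in half on trouble

def aimd(rounds=40, start_rate=1.0):
    """Simulate one sender probing for its 'sweet spot', round by round."""
    rate = start_rate
    history = []
    for _ in range(rounds):
        # Congestion shows up as a delayed or lost acknowledgement; here we
        # fake it when the rate exceeds capacity or with a small random chance.
        congested = rate > CAPACITY or random.random() < 0.05
        if congested:
            rate = max(1.0, rate * DECREASE_FACTOR)  # back off sharply
        else:
            rate += ADDITIVE_STEP                    # probe gently upward
        history.append(rate)
    return history

print(aimd())

Run over many rounds, the sending rate traces the familiar sawtooth: it climbs gently, gets cut in half at the first sign of trouble, and then climbs again, hovering around its sweet spot.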

Because AIMD is only one of a number of flow-control algorithms, the duo modeled six others as well, then analyzed which model best matched physiological data on neural activity from 20 experimental studies. In their models, AIMD turned out to be the most efficient at keeping the flow of information moving smoothly, adjusting traffic rates whenever paths got too congested. More interesting still, AIMD also turned out to be the best at explaining what was happening to neurons experimentally.
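A toy comparison hints at why the multiplicative cut wins. The sketch below is for illustration only and is not one of the six algorithms the researchers actually tested: two senders that start out unequal share a single link, and under AIMD they converge toward an equal share of the capacity, while a variant that also decreases additively preserves the initial imbalance indefinitely.

def share_link(decrease_rule, rounds=200, capacity=100.0, rates=(10.0, 60.0), step=1.0):
    """Two senders start out unequal. Each round both add `step` to their
    rates; whenever their combined rate exceeds capacity, both apply
    `decrease_rule` instead."""
    r1, r2 = rates
    for _ in range(rounds):
        if r1 + r2 > capacity:              # shared congestion signal
            r1, r2 = decrease_rule(r1), decrease_rule(r2)
        else:                               # additive increase for both
            r1, r2 = r1 + step, r2 + step
    return round(r1, 1), round(r2, 1)

# Multiplicative decrease (halve the rate) versus additive decrease (subtract 10)
print("AIMD:", share_link(lambda r: r * 0.5))         # senders end up nearly equal
print("AIAD:", share_link(lambda r: max(r - 10, 0)))  # the head start never goes away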

It turns out that the neuronal equivalent of additive increase is called long-term potentiation. It occurs when one neuron fires shortly after another, which strengthens their synaptic connection and makes it slightly more likely the first will trigger the second in the future. The neuronal equivalent of multiplicative decrease occurs when the firing order of two neurons is reversed (second before first), which weakens their connection, making the first much less likely to trigger the second in the future. This is called long-term depression. As synapses throughout the network weaken or strengthen according to this rule, the whole system adapts and learns.
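The analogy can be written down in a few lines. The sketch below treats a single synaptic weight the way AIMD treats a sending rate; the step size, decrease factor, and weight cap are assumptions chosen for illustration, not measured values from the studies the researchers analyzed.

# Assumed, illustrative values; not measured from the experimental studies
LTP_STEP = 0.05     # "additive increase": pre fires just before post
LTD_FACTOR = 0.5    # "multiplicative decrease": post fires before pre
W_MAX = 1.0         # cap on how strong a single synapse can get

def update_weight(weight, pre_before_post):
    """Return the new synaptic weight after one pairing of spikes."""
    if pre_before_post:
        return min(weight + LTP_STEP, W_MAX)  # long-term potentiation
    return weight * LTD_FACTOR                # long-term depression

# A short, made-up sequence of pairings: the weight creeps up additively and
# is knocked down multiplicatively, much like a sending rate under AIMD.
w = 0.2
for pre_first in [True, True, True, False, True, True, False]:
    w = update_weight(w, pre_first)
    print(round(w, 3))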

"While the brain and the Internet clearly operate using very different mechanisms, both use simple local rules that give rise to global stability," says Suen. "I was initially surprised that biological neural networks utilized the same algorithms as their engineered counterparts, but, as we learned, the requirements for efficiency, robustness, and simplicity are common to both living organisms and the networks we have built."

Understanding how the system works under normal conditions could help neuroscientists better understand what happens when these processes are disrupted, for example in learning disabilities. "Variations of the AIMD algorithm are used in basically every large-scale distributed communication network," says Navlakha. "Discovering that the brain uses a similar algorithm may not be just a coincidence."

The Daily Galaxy via Salk Institute and www.wired.com

"The Galaxy" in Your Inbox, Free, Daily