When Does the Brain Operate at Peak Performance?

Introduction

Over the last few decades, an idea called the critical brain hypothesis has been helping neuroscientists understand how the human brain operates as an information-processing powerhouse. It posits that the brain is always teetering between two phases, or modes, of activity: a random phase, where it is mostly inactive, and an ordered phase, where it is overactive and on the verge of a seizure. The hypothesis predicts that between these phases, at a sweet spot known as the critical point, the brain has a perfect balance of variety and structure and can produce the most complex and information-rich activity patterns. This state allows the brain to optimize multiple information processing tasks, from carrying out computations to transmitting and storing information, all at the same time.

To illustrate how phases of activity in the brain — or, more precisely, activity in a neural network such as the brain — might affect information transmission through it, we can play a simple guessing game. Imagine that we have a network with 10 layers and 40 neurons in each layer. Neurons in the first layer will only activate neurons in the second layer, and those in the second layer will only activate those in the third layer, and so on. Now, I will activate some number of neurons in the first layer, but you will only be able to observe the number of neurons active in the last layer. Let’s see how well you can guess the number of neurons I activated under three different strengths of network connections.

First, let’s consider weak connections. In this case, neurons typically activate independently of each other, and the pattern of network activity is random. No matter how many neurons I activate in the first layer, the number of neurons activated in the last layer will tend toward zero because the weak connections dampen the spread of activity. This makes our guessing game incredibly difficult. The amount of information about the first layer that you can learn from the last layer is practically nothing.

Next, let’s consider strong connections — surely this setup will transmit information well? Actually, it won’t. When one strongly connected neuron is active, it activates multiple other neurons, spreading activity until nearly all the neurons in the final layer are active. Activity gets through, but this saturation does not let you accurately guess whether I activated one neuron in the first layer or all 40. The amplification has washed away most of that information.

Finally, let’s consider the intermediate “critical” case, where the connection strength lies between the previous two examples. We avoid the pitfalls of being excessively damped or amplified, and the number of neurons activated is roughly preserved across layers. If I activate 12 neurons in the first layer, you might see anywhere from nine to 15 neurons active in the last layer. You could deduce the number I activated — not perfectly, but at least somewhat accurately.
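For readers who like to tinker, the guessing game is easy to simulate. The sketch below is a minimal, illustrative model rather than any of the experimental setups discussed here: it assumes that each active neuron excites each neuron in the next layer with probability p, so that sigma = 40p is the branching ratio, the expected number of neurons a single active neuron recruits. Values of sigma below, at and above 1 stand in for the weak, critical and strong connections described above.

```python
import random

N_LAYERS = 10      # feedforward layers, as in the guessing game
N_PER_LAYER = 40   # neurons in each layer

def last_layer_count(n_active_first, sigma, rng):
    """Propagate activity through the layered network.

    Each active neuron excites each neuron in the next layer with
    probability p = sigma / N_PER_LAYER, so sigma is the branching
    ratio: the expected number of neurons one active neuron recruits.
    Returns the number of active neurons in the final layer.
    """
    p = sigma / N_PER_LAYER
    n_active = n_active_first
    for _ in range(N_LAYERS - 1):
        # A downstream neuron fires if at least one of the n_active
        # upstream neurons succeeds in activating it.
        p_fire = 1.0 - (1.0 - p) ** n_active
        n_active = sum(rng.random() < p_fire for _ in range(N_PER_LAYER))
        if n_active == 0:
            break  # activity has died out completely
    return n_active

if __name__ == "__main__":
    rng = random.Random(1)
    stimulus = 12  # neurons I activate in the first layer
    for label, sigma in [("weak (subcritical)", 0.5),
                         ("critical", 1.0),
                         ("strong (supercritical)", 3.0)]:
        trials = [last_layer_count(stimulus, sigma, rng) for _ in range(5)]
        print(f"{label:>24}: last-layer counts over 5 trials: {trials}")
```

Run repeatedly, the weak network almost always reports zero active neurons in the last layer, the strong network reports nearly all 40, and the critical network reports an intermediate count that still tracks how many neurons were activated at the start.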

We can precisely quantify this improved ability to guess as a measure of information transmission. If I picked a number from 1 to 40 and you asked “Is it 20 or less?” and I replied yes, you would have cut the range of your guesses exactly in half. This reduction of uncertainty is equivalent to one bit of information. You could cut the range in half again and gain another bit of information by asking “Is it greater than 10?” At the critical point, you can more accurately guess what the stimulus was, so it’s possible to transmit more bits of information.
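To make the bookkeeping explicit (this is just the standard definition of information as a reduction of uncertainty, not a result specific to the critical brain work): narrowing 40 equally likely possibilities down to 20 is worth one bit, and pinning down the exact number would take about 5.3 bits.

```latex
I_{\text{one question}} = \log_2 40 - \log_2 20 = \log_2 2 = 1\ \text{bit},
\qquad
I_{\text{exact answer}} = \log_2 40 \approx 5.32\ \text{bits}.
```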

The same sense of a critical brain being “just right” also explains why other tasks should be optimized. For example, consider information storage, which is driven by the activation of groups of neurons called assemblies. In a subcritical network, the connections are so weak that very few neurons are coupled together, so only a few small assemblies can form. In a supercritical network, the connections are so strong that almost all neurons are coupled together, which allows only one large assembly. In a critical network, the connections are strong enough for many moderately sized groups of neurons to couple, yet weak enough to prevent them from all coalescing into one giant assembly. This balance leads to the largest number of stable assemblies, maximizing information storage.

And this is not just theory or simulation: Experiments both on isolated networks of neurons and in intact brains have upheld many of these predictions. Further, we have seen these benefits arise across many different species, in turtles, cats and even humans. Most of these studies have focused on the outer part of the brain, known as the cortex, although some have included subcortical regions as well. Overall, the studies have shown that these networks operate near the critical point.

Despite the ubiquity of this phenomenon, it is possible to disrupt it. For example, when one eye of a rat is covered, its visual cortex is pushed away from the critical point and transmits information more erratically. (The cortex seems to adjust to this change and spontaneously returns to the critical point after two days.) Similarly, when humans are sleep deprived, their brains become supercritical, although a good night’s sleep can move them back toward the critical point. It thus appears that brains naturally tune themselves to operate near the critical point, much as the body keeps blood pressure, temperature and heart rate within healthy ranges despite changes in the environment. This insight is important for understanding neurological health: New research has suggested that brain diseases like epilepsy are associated with failure to operate near the critical point or to return to it once pushed away.

So, why is this view of the critical brain still just a hypothesis? While the evidence in its favor is good, it’s still under discussion. The claim that the cortex operates near the critical point is a sweeping one, encompassing optimal information processing, neurological health and a nearly universal application across species. The need for strong scrutiny is not surprising.

Early critiques argued that the statistical tests used to show a network was near the critical point needed to be more rigorous. The field responded constructively, and this type of objection is rarely heard these days. More recently, some work has shown that what was previously considered a signature of criticality might also be the result of random processes. Researchers are still investigating that possibility, but many of them have already proposed new criteria for distinguishing between the apparent criticality of random noise and the true criticality of collective interactions among neurons.

Meanwhile, over the past 20 years, research in this area has steadily become more visible. The breadth of methods used to assess criticality has also grown. The biggest questions now focus on how operating near the critical point affects cognition, and how external inputs can drive a network to move around the critical point. Ideas about criticality have also begun to spread beyond neuroscience. Citing some of the original papers on criticality in living neural networks, engineers have shown that self-organized networks of atomic switches can be made to operate near the critical point so that they compute many functions optimally. The deep learning community has also begun to study whether operating near the critical point improves artificial neural networks.

The critical brain hypothesis may yet prove to be wrong, or incomplete, although current evidence does support it. Either way, the understanding it provides is generating an avalanche of questions and answers that tell us much more about the brain — and computing generally — than we knew before.
