Our Ability to Keep 'em Guessing Peaks around Age 25

When it comes to abilities like outwitting foes, younger people are simply better at responding in novel situations

The brain processes sights, sounds and other sensory information—and even makes decisions—based on a calculation of probabilities. At least, that’s what a number of leading theories of mental processing tell us: The body’s master controller builds an internal model from past experiences, and then predicts how best to behave. Although studies have shown humans and other animals make varied behavioral choices even when performing the same task in an identical environment, these hypotheses often attribute such fluctuations to “noise”—to an error in the system.

But not everyone agrees this provides the complete picture. After all, sometimes it really does pay off for randomness to enter the equation. A prey animal has a higher chance of escaping predators if its behavior cannot be anticipated easily, something made possible by introducing greater variability into its decision-making. Or in less stable conditions, when prior experience can no longer provide an accurate gauge for how to act, this kind of complex behavior allows the animal to explore more diverse options, improving its odds of finding the optimal solution. One 2014 study found rats resorted to random behavior when they realized nonrandom behavior was insufficient for outsmarting a computer algorithm. Perhaps, then, this variance cannot simply be chalked up to mere noise. Instead, it plays an essential role in how the brain functions.

Now, in a study published April 12 in PLoS Computational Biology, a group of researchers in the Algorithmic Nature Group at LABORES Scientific Research Lab for the Natural and Digital Sciences in Paris hopes to illuminate how this complexity unfolds in humans. “When the rats tried to behave randomly [in 2014],” says Hector Zenil, a computer scientist who is one of the study’s authors, “researchers saw that they were computing how to behave randomly. This computation is what we wanted to capture in our study.” Zenil’s team found that, on average, people’s ability to behave randomly peaks at age 25, then slowly declines until age 60, when it starts to decrease much more rapidly.


To test this, the researchers had more than 3,400 participants, aged four to 91, complete a series of tasks—“a sort of reversed Turing test,” Zenil says, determining how well a human can outcompete a computer when it comes to producing and recognizing random patterns. The subjects had to create sequences of coin tosses and die rolls they believed would look random to another person, guess which card would be drawn from a randomly shuffled deck, point to circles on a screen and color in a grid to form a seemingly random design.

The team then analyzed these responses to quantify their level of randomness by determining the probability that a computer algorithm could generate the same decisions, measuring algorithmic complexity as the length of the shortest possible computer program that could model the participants’ choices. In other words, the more random a person’s behavior, the more difficult it would be to describe his or her responses mathematically, and the longer the algorithm would be. If a sequence were truly random, it would not be possible for such a program to compress the data at all—it would be the same length as the original sequence.
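
To make the compression intuition concrete, here is a minimal Python sketch, not the study’s actual pipeline: the sequences are invented for illustration, the helper name compressed_size is ours, and off-the-shelf zlib compression stands in, very loosely, for a measure of description length.

```python
import os
import zlib

def compressed_size(bits: str) -> int:
    # Bytes needed by zlib to store the sequence: a rough upper bound on
    # how short a description of it can be (a proxy, not the study's measure).
    return len(zlib.compress(bits.encode("ascii"), 9))

patterned = "01" * 500                       # 1,000 bits of strict alternation
repeated_motif = "0110010111" * 100          # a made-up 10-bit motif, repeated
pseudo_random = "".join(format(b, "08b") for b in os.urandom(125))  # 1,000 bits

for name, seq in [("patterned", patterned),
                  ("repeated motif", repeated_motif),
                  ("pseudo-random", pseudo_random)]:
    print(f"{name:>15}: {len(seq)} bits -> {compressed_size(seq)} bytes compressed")
```

The two regular sequences shrink to a few dozen bytes at most, whereas the pseudo-random bits cannot be squeezed much below the roughly 125 bytes of raw information they carry; that gap is the qualitative point behind equating randomness with incompressibility.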

After controlling for factors such as language, sex and education, the researchers concluded age was the only characteristic that affected how randomly someone behaved. “At age 25, people can outsmart computers at generating this kind of randomness,” Zenil says. This developmental trajectory, he adds, reflects what scientists would expect measures of higher cognitive abilities to look like. In fact, a sense of complexity and randomness is based on cognitive functions including attention, inhibition and working memory (which were involved in the study’s five tasks)—although the exact mechanisms behind this relationship remain unknown. “It is around 25, then, that minds are the sharpest.” This makes biological sense, according to Zenil: Natural selection would favor a greater capacity for generating randomness during key reproductive years.

The study’s results may even have implications for understanding human creativity. After all, a large part of being creative is the ability to develop new approaches and test different outcomes. “That means accessing a larger repository of diversity,” Zenil says, “which is essentially randomness. So at 25, people have more resources to behave creatively.”

Zenil’s findings support previous research, which also showed a decline in random behavior with age. But this is the first study to employ an algorithmic approach to measuring complexity as well as the first to do so over a continuous age range. “Earlier studies considered groups of young and older adults, capturing specific statistical aspects such as repetition rate in very long response sequences,” says Gordana Dodig-Crnkovic, a computer scientist at Mälardalen University in Sweden, who was not involved in the research. “The present article goes a step further.” Using algorithmic measures of randomness, rather than statistical ones, allowed Zenil’s team to examine true random behavior instead of statistical, or pseudorandom, behavior—which, although satisfying statistical tests for randomness, would not necessarily be “incompressible” the way truly random data is. The fact that algorithmic capability differed with age implies the brain is algorithmic in nature—that it does not assume the world is statistically random but takes a more generalized approach without the biases described in more traditional statistical models of the brain.
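
The distinction is easy to demonstrate with a toy example that has nothing to do with the study itself: the sketch below uses a textbook linear congruential generator (the function name lcg_bits and its parameters are ours), whose output looks statistically balanced even though the entire sequence follows from a few lines of code and one seed.

```python
from collections import Counter

def lcg_bits(seed: int, n: int, a=1103515245, c=12345, m=2**31):
    # A textbook linear congruential generator, emitting one bit per step.
    x = seed
    bits = []
    for _ in range(n):
        x = (a * x + c) % m
        bits.append((x >> 16) & 1)
    return bits

bits = lcg_bits(seed=42, n=10_000)
print(Counter(bits))  # close to 5,000 zeros and 5,000 ones: a naive balance test is satisfied
# Yet all 10,000 bits follow from this short program and a single seed,
# so the sequence is pseudorandom: statistically plausible but highly compressible.
```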

These results may open up a wider perspective on how the brain works: as an algorithmic probability estimator. The theory would update and eliminate some of the biases in statistical models of decision-making that lie at the heart of prevalent theories—prominent among them is the Bayesian brain hypothesis, which holds that the mind assigns a probability to a conjecture and revises it when new information is received from the senses. “The brain is highly algorithmic,” Zenil says. “It doesn’t behave stochastically, or as a sort of coin-tossing mechanism.” Neglecting an algorithmic approach in favor of only statistical ones gives us an incomplete understanding of the brain, he adds. For instance, a statistical approach does not explain why we can remember sequences of digits such as a phone number—take “246-810-1214,” whose digits are simply the even numbers from two through 14: This is not a statistical property, but an algorithmic one. We can recognize the pattern and use it to memorize the number.
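
The phone-number example can be spelled out in a few lines of illustrative Python (our own sketch, not anything from the paper): a plain frequency count, the kind of summary a statistical description provides, finds nothing notable in the digits, while a one-line rule regenerates them exactly.

```python
from collections import Counter

digits = "2468101214"  # "246-810-1214" with the dashes removed

# Statistical view: a frequency count of the digits reveals nothing special.
print(Counter(digits))

# Algorithmic view: a one-line rule (list the even numbers from 2 through 14)
# regenerates the whole string, so it has a very short description.
rule_output = "".join(str(n) for n in range(2, 16, 2))
print(rule_output == digits)  # True
```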

Algorithmic probability, moreover, allows us to more easily find (and compress) patterns in information that appears random. “This is a paradigm shift,” Zenil says, “because even though most researchers agree that there is this algorithmic component in the way the mind works, we had been unable to measure it because we did not have the right tools, which we have now developed and introduced in our study.”
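
To give a flavor of what such a tool measures, here is a deliberately tiny caricature of algorithmic probability, entirely our own construction: in this toy model a “program” is just a pattern repeated until it fills the string, whereas real estimators, including the ones researchers in this area build, enumerate vastly richer program spaces.

```python
import math
from itertools import product

def toy_algorithmic_probability(s: str, alphabet: str = "01") -> float:
    # Toy estimate of the algorithmic probability of s: add 2**(-size) for
    # every toy "program" that outputs s, where a program is just a pattern
    # repeated until it fills the string and its size is len(pattern) + 1.
    total = 0.0
    for plen in range(1, len(s) + 1):
        if len(s) % plen:
            continue
        for pattern in ("".join(p) for p in product(alphabet, repeat=plen)):
            if pattern * (len(s) // plen) == s:
                total += 2.0 ** -(plen + 1)
    return total

for seq in ["0101010101", "0110100110"]:
    m = toy_algorithmic_probability(seq)
    # Coding theorem, informally: complexity is roughly -log2 of this probability.
    print(seq, "probability ~", round(m, 6), "complexity ~", round(-math.log2(m), 2))
```

Even this caricature ranks the alternating string as far less complex than the irregular one, which is the ordering a full-blown (and uncomputable) measure of algorithmic complexity would be expected to give here.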

Zenil and his team plan to continue exploring human algorithmic complexity, and hope to shed light on the cognitive mechanisms underlying the relationship between behavioral randomness and age. First, however, they plan to conduct their experiments with people who have been diagnosed with neurodegenerative diseases and mental disorders, including Alzheimer’s and schizophrenia. Zenil predicts, for example, that participants diagnosed with the latter will not generate or perceive randomness as well as their counterparts in the control group, because they often make more associations and observe more patterns than the average person does.

The researchers’ colleagues are standing by. The team’s work on complexity, says Dodig-Crnkovic, “presents a very promising approach.”