
An AI System Spontaneously Develops Baby-Like Ability to Gauge Big and Small

On its own, the neural network seems to recapitulate a process experienced by human infants

Training software that emulates brain networks to identify dog breeds or sports equipment is by now old news. But getting such an AI network to spontaneously pick up an ability that is innate to early child development is truly novel. In a paper published Wednesday in Science Advances, a neural network distinguished between different quantities of things, even though it was never taught what a number is.

The neural net reprised a cognitive skill innate to human babies, monkeys and crows, among others. Without ever being trained on numbers, it could tell the difference between larger and smaller amounts, a skill called numerosity, or number sense. Many researchers believe number sense is an essential precursor to our ability to count and to do more complex mathematics. But questions have persisted about how this ability spontaneously arises in the young brain.

To research its development, scientists at the University of Tübingen in Germany used a deep-learning system designed to mimic the human brain to see whether numerosity would emerge without explicitly training the software on numbers. “We were trying to simulate the workings of the visual system of our brain by building a deep-learning network, an artificial neural network,” says Andreas Nieder, a professor in the Institute of Neurobiology at Tübingen and senior author on the new paper. “The big question was, how is it possible that our brain and the brain of animals can spontaneously represent the number of items in a visual scene?”


The researchers first trained the network on a standard data set of 1.2 million images sorted into 1,000 categories. Eventually the system, like many before it, could identify pictures of animals not merely as a dog or a spider but specifically as a miniature schnauzer or a wolf spider.
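
As a rough illustration of this first stage, the sketch below loads an off-the-shelf network pretrained on the same kind of 1.2-million-image, 1,000-category data set (ImageNet) and asks it to classify a picture. The choice of AlexNet and the image file name are assumptions made for the example; the article does not specify the paper's exact architecture.

```python
# A rough sketch of the first training stage: an off-the-shelf network
# pretrained on a 1.2-million-image, 1,000-category data set (ImageNet)
# stands in for the paper's model. The file name is a hypothetical example.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()  # inference mode: we only classify, we do not train further

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet
                         std=[0.229, 0.224, 0.225]),   # normalization stats
])

image = preprocess(Image.open("schnauzer.jpg")).unsqueeze(0)
with torch.no_grad():
    scores = model(image)
print(int(scores.argmax()))  # index of the predicted ImageNet category
```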

Next, the researchers showed the neural network images that contained just patterns of white dots on a black background to represent the numbers one through 30. Without being taught anything about numbers or being told to look for differences in quantity, the system was able to classify each image by the number of dots in it. “When you train neural networks that look like the visual system to do tasks like object recognition, it learns other things for free,” says James DiCarlo, a professor in the department of brain and cognitive sciences at MIT who was not involved in the research. “What's cool about this study is they're measuring things that are probed by vision but are usually not thought of as visual things purely, like numerosity.”
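
A minimal sketch of how such probe stimuli could be generated follows; the dot radius, image size and number of images per count are illustrative choices, since the article does not give the paper's exact stimulus parameters.

```python
# A minimal sketch of the probe stimuli: white dots on a black background,
# covering the numerosities 1 through 30. Dot radius, image size and the
# number of images per count are illustrative, not the paper's protocol.
import numpy as np

def dot_image(n_dots, size=224, radius=6, rng=None):
    """Render n_dots non-overlapping white disks on a black square image."""
    if rng is None:
        rng = np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    centers = []
    while len(centers) < n_dots:
        x, y = rng.integers(radius, size - radius, size=2)
        if all((x - cx) ** 2 + (y - cy) ** 2 > (2 * radius) ** 2
               for cx, cy in centers):
            centers.append((x, y))
    yy, xx = np.mgrid[0:size, 0:size]
    for cx, cy in centers:
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = 1.0
    return img

# Ten example images for each numerosity, ready to feed to the network.
stimuli = {n: [dot_image(n) for _ in range(10)] for n in range(1, 31)}
```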

Nieder’s team uses a deep-learning system that mimics the human brain, with layers of “neurons” that receive input from the neurons before them and send their output on down the line. A given neuron “fires” in response to specific features or patterns in a stimulus.
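
The sketch below shows this feedforward idea in its barest form; the layer sizes and weights are random placeholders for illustration, not anything from the paper.

```python
# A bare-bones sketch of the feedforward idea: each artificial "neuron"
# sums weighted input from the layer before it and "fires" (outputs a
# positive value) only when that sum is positive. All weights here are
# random placeholders, purely for illustration.
import numpy as np

def layer(inputs, weights, biases):
    return np.maximum(0.0, weights @ inputs + biases)  # ReLU "firing"

rng = np.random.default_rng(1)
x = rng.random(8)                          # activity of an earlier layer
h = layer(x, rng.standard_normal((4, 8)), np.zeros(4))
print(h)  # units driven past zero "fire"; the rest stay silent
```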

Using this model, Nieder compared the activation of the network’s neurons with that of neurons in the brains of monkeys shown the same dot patterns. The artificial neurons behaved exactly like the neurons in the visual-processing areas of the animal brains, each preferring, and tuned to, a specific number. For instance, a neuron that preferred the number six showed its highest activation whenever six dots were shown. It fired a little less in response to five and seven dots, less again for four and eight, and so on, its activity dropping off steadily as the stimulus moved further from its preferred number.
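
One way to read out such a tuning curve is sketched below; the Gaussian-tuned "fake" unit is synthetic data for illustration only, not a recording from the paper.

```python
# A sketch of the tuning-curve analysis described above: average each
# unit's activation over many images of each numerosity, take the peak as
# its preferred number and watch the response fall off with distance.
# The Gaussian-tuned "fake" unit below is synthetic, for illustration only.
import numpy as np

def tuning_curve(responses_by_number):
    """responses_by_number: dict mapping numerosity -> list of activations."""
    numbers = sorted(responses_by_number)
    curve = np.array([np.mean(responses_by_number[n]) for n in numbers])
    return numbers, curve, numbers[int(curve.argmax())]

rng = np.random.default_rng(0)
fake_unit = {n: np.exp(-((n - 6) ** 2) / 8.0) + 0.05 * rng.random(10)
             for n in range(1, 31)}
numbers, curve, preferred = tuning_curve(fake_unit)
print(preferred)  # 6: the unit fires most to six dots, less to 5 and 7, etc.
```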

“This was very exciting for us to see because these are exactly the types of responses that we see in real neurons in the brain,” says Nieder. “This could be an explanation that the wiring of our brain, of our visual system at the very least, can give rise to representing the number of objects in a scene spontaneously.”

The neural network even made the same kinds of mistakes a human brain makes. It had more trouble distinguishing between numbers that are close together, such as four and five, than between ones farther apart, such as four and nine. And, just as humans do, it struggled more with larger numbers (20 versus 25) than with smaller ones separated by the same numerical distance (one versus six).
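
A short worked illustration of both effects follows, assuming (as is standard in the number-sense literature, though the article does not spell it out) that discriminability follows Weber's law: what matters is the ratio of the two numbers, not their absolute difference.

```python
# A worked illustration of the distance and size effects under Weber's law:
# discriminability tracks the ratio of the two numbers being compared.
def weber_ratio(a, b):
    return min(a, b) / max(a, b)  # closer to 1 means harder to tell apart

print(weber_ratio(4, 5))    # 0.80 -> hard: numbers are close together
print(weber_ratio(4, 9))    # ~0.44 -> easier: numbers are far apart
print(weber_ratio(20, 25))  # 0.80 -> hard again, same ratio as 4 vs. 5
print(weber_ratio(1, 6))    # ~0.17 -> easy, despite the same distance of 5
```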

Véronique Izard, a research scientist at Université Paris Descartes who studies mathematical thinking, wrote in an e-mail that the authors “provide a mechanistic explanation for the emergence of sensitivity to numerosity.” This suggests, she says, that numerosity is not evolutionarily selected for in and of itself but rather “emerged spontaneously, as a by-product of learning to recognize objects.”

Not every scientist was equally impressed. Peter Gordon, an associate professor of neuroscience and education at Columbia University, says that neural networks are making assessments “based on low-level visual information, like lines and angles and shading and stuff like that. So, if you’re applying that to these quantities, what’s happening is it’s just showing that several pictures of 10 dots are more similar than several pictures of eight dots. It’s not really picking up number.”

For his part, Nieder insists this type of neural network provides a better model of the human brain. “We can now have hypotheses about how things happen in the brain and can go back and forth from artificial networks to real networks,” he says. “That's I think one of the big advantages of these networks for basic science.”

Dana Smith is a freelance science writer specializing in brains and bodies. She has written for Scientific American, the Atlantic, the Guardian, NPR, Discover, and Fast Company, among other outlets. In a previous life, she earned a Ph.D. in experimental psychology from the University of Cambridge.
