Artificial intelligence in the modern era typically involves enormous machines. Somewhere in northern Sweden or Nevada, windowless data centers are humming. GPU racks are consuming power like a tiny metropolis. It’s a striking picture, but it seems strangely disconnected from the biological brain that quietly resides inside a human skull. Some neuroscientists appear to be troubled by that contrast.

Benjamin Cowley, an assistant professor at Cold Spring Harbor Laboratory, and collaborators from Princeton University and Carnegie Mellon University have taken a different approach. Rather than constructing ever-larger AI systems, they compressed one until it started to resemble a biological brain. The end product is a vision model small enough to fit inside an email attachment, which is almost ridiculously small. The peculiar twist is where the intelligence came from. Not people. Monkeys.
| Field | Information |
|---|---|
| Key Researcher | Benjamin Cowley |
| Institution | Cold Spring Harbor Laboratory |
| Collaborating Institutions | Carnegie Mellon University, Princeton University |
| Species Studied | Macaque monkeys |
| Focus Area | Visual cortex neurons (V4) |
| Original Model Size | ~60 million variables |
| Compressed Model Size | ~10,000 variables |
| Published In | Nature |
| Scientific Goal | Understanding how biological brains process images |
| Reference | https://www.nature.com |
Neural recordings from macaque monkeys, a species whose visual system closely resembles our own, served as the team’s starting point. In carefully monitored experiments, the animals were shown curated sets of images: birds, donuts, rubber ducks, plush toys, and wading birds. Not arbitrary images. Images selected because they trigger intense activity in V4, a specific area of the visual cortex.
It’s oddly illuminating to watch those neurons fire. Some cells react to textures and colors. Some glow at curves. Others respond to patterns that resemble piles of fruit in a grocery store display. Oranges and apples. Bananas bending against one another. It’s difficult to ignore the brain’s capacity for specificity.
To forecast how these neurons would respond to each image, Cowley’s team trained a sizable AI model. At first the system looked like any other deep neural network: heavy, complex, and packed with tunable parameters, about 60 million variables in all. This kind of architecture typically demands a significant amount of processing power. The intriguing part, however, came later.
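The article doesn’t describe the model’s actual architecture, but the underlying task, predicting firing rates from pixels, is a regression problem. As a minimal sketch (all data and dimensions below are fabricated for illustration; the real model was far larger), a ridge-regression readout captures the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 stimulus images, each a flattened 32x32 patch,
# and recorded mean firing rates for 100 V4 neurons (fabricated here).
images = rng.standard_normal((500, 32 * 32))
true_map = rng.standard_normal((32 * 32, 100))
rates = images @ true_map * 0.01 + rng.standard_normal((500, 100)) * 0.1

# Ridge regression: the simplest image -> firing-rate predictor.
lam = 1.0
W = np.linalg.solve(
    images.T @ images + lam * np.eye(images.shape[1]),
    images.T @ rates,
)

pred = images @ W
r2 = 1 - ((rates - pred) ** 2).sum() / ((rates - rates.mean(0)) ** 2).sum()
print(f"in-sample R^2: {r2:.2f}")
```

Cowley’s team used a deep network rather than a linear map, but the input-output contract is the same: images in, predicted neural activity out.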
The researchers started squeezing the model rather than expanding it, using statistical methods akin to those that shrink digital images. Redundant parts vanished. Connections broke down into simpler structures. The machine intelligence slowly folded inward.
The model eventually had only 10,000 variables, less than a thousandth of the original system’s size. Strangely enough, though, it continued to function.
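The article compares the compression to shrinking a digital image but doesn’t name the exact technique. Magnitude-based pruning, a standard compression method that may or may not match what the team did, gives a flavor of how a network can shed most of its weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical dense weight matrix standing in for one layer: 60,000 weights.
W = rng.standard_normal((300, 200))

def prune_to_budget(weights, n_keep):
    """Keep only the n_keep largest-magnitude weights; zero the rest.

    Magnitude pruning is a common compression technique, shown here for
    illustration; it is not necessarily the method Cowley's team used.
    """
    threshold = np.sort(np.abs(weights).ravel())[-n_keep]  # n_keep-th largest
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Keep 1% of the weights (the paper's 60-million-to-10,000 reduction
# was even more drastic, roughly 0.02%).
pruned, mask = prune_to_budget(W, n_keep=600)
print(f"surviving weights: {mask.sum()} of {W.size}")
```

The surprise in the study was not that pruning is possible, but how far it could go before prediction quality collapsed.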
That outcome has a subtle implication. Contemporary AI development frequently assumes that intelligence necessitates enormous networks and limitless processing power. Yet the primate brain, which runs on roughly the power of a lightbulb, accomplishes something far more remarkable with far less machinery.
As this research progresses, there’s a growing belief that biology might have found a solution to the efficiency issue long before Silicon Valley realized it.
The compact model is intriguing because, after simplification, it becomes transparent. The researchers were able to observe the internal workings of the system because there were fewer artificial neurons to examine. With today’s massive AI models, that is rarely feasible. Patterns started to show up.
Many artificial neurons separated images into their constituent elements, such as edges, colors, and curves, and then reassembled them in slightly different ways. An unexpectedly graceful procedure. One that reflects the behavior of actual neurons in biological vision systems.
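The decomposition into edges and curves described above can be sketched with a simple oriented filter, loosely analogous to what early visual neurons (and the model’s artificial ones) compute. The image, kernel, and convolution below are illustrative only:

```python
import numpy as np

# A tiny synthetic image with a vertical bright bar (hypothetical stimulus).
img = np.zeros((8, 8))
img[:, 3:5] = 1.0

# A vertical-edge detector, similar in spirit to a Sobel kernel.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def filter2d(image, k):
    """Valid-mode 2D correlation with a 3x3 kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (image[i:i + 3, j:j + 3] * k).sum()
    return out

response = filter2d(img, kernel)
# The filter responds strongly where the bar's left and right edges sit,
# and stays silent over uniform regions.
print(np.abs(response).max())
```

Stacking many such filters, at different orientations and scales, and recombining their outputs is the basic recipe both in convolutional networks and, roughly, in early visual cortex.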
Some artificial neurons preferred curved shapes, like those in a fruit display or a pile of vegetables. Others appeared fixated on tiny dots. The researchers were unprepared for that final detail.
But when you consider it, it makes sense. Primates’ eyes are basically dots. Tiny, dark circles with deep social significance. It’s possible that the brain developed specific detectors just to identify them. A neuron may be silently waiting for dots somewhere in your brain right now. It’s an odd idea.
However, this work’s true potential might be hidden from view. These kinds of models, according to Cowley and his colleagues, could aid researchers in simulating neurological disorders, particularly those that impair neural connections.
Consider Alzheimer’s disease. Researchers are aware that as the illness worsens, synapses progressively disappear, impairing neuronal communication. One day, a simplified AI model may provide insights into how to reestablish lost connections if it can identify which visual cues promote neuronal connections.
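One way a compact model could simulate this kind of synapse loss: delete a growing fraction of its connections and watch prediction quality degrade. The sketch below is hypothetical, a toy linear network with fabricated weights, not the team’s protocol:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy linear "model": 100 inputs -> 50 predicted firing rates.
W = rng.standard_normal((100, 50))
x = rng.standard_normal((200, 100))
y_full = x @ W                      # output of the intact network

# Simulate progressive synapse loss by zeroing random connections.
errs = []
for frac in (0.1, 0.3, 0.5):
    surviving = rng.random(W.shape) >= frac      # True where synapses remain
    y_lesioned = x @ (W * surviving)
    err = np.linalg.norm(y_full - y_lesioned) / np.linalg.norm(y_full)
    errs.append(err)
    print(f"{frac:.0%} of synapses removed -> relative error {err:.2f}")
```

In a model with interpretable units, one could go further and ask which stimuli most re-engage the degraded pathways, which is the speculative therapeutic angle the researchers describe.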
Neural pathways may be strengthened by specific images or stimuli, though this remains unknown. It sounds almost unreal. Using thoughtfully designed visual experiences to treat brain disorders.
However, there is a history of bizarre discoveries in neuroscience that eventually find their way into standard medicine.
The study also raises unsettling questions about artificial intelligence well beyond academic circles. If primate brains achieve powerful perception with compact networks, perhaps today’s AI systems are just… inefficient. Bloated machines taking the hard way to easy solutions.
Mitya Chklovskii of the Flatiron Institute is among the scientists who believe that contemporary AI is still based on antiquated theories about how the brain works. Deep learning largely rests on simplified neuron models from the 20th century, devised long before modern neuroscience mapped the intricate structure of real neural circuits.
This discrepancy may help explain why humans can instantly recognize a friend, even in low light or with a new haircut, while AI still struggles with such variations. Something crucial is missing.
Cowley’s tiny AI brain is remarkable for how small it is compared with contemporary systems. No expansive server farms. No trillion-parameter models. Just a condensed network modeled on monkey neurons, small enough to send via email.
It’s still unclear if this strategy will result in improved AI or more profound understanding of the human brain. Seldom does science proceed in a linear fashion. However, as this new generation of biologically inspired models emerges, there’s a subtle sense that smaller machines, rather than larger ones, may be the source of the next big advancement in intelligence.
