Cortical Labs Plugs Human Brain Cells Into an LLM After They Mastered DOOM

Just when you thought the AI hype cycle couldn’t get any more surreal, a company in Australia decided to ditch the GPUs and plug an AI into a living, biological brain. Well, sort of. Cortical Labs, the biotech firm that previously taught a dish of about 800,000 human neurons to play the classic video game Pong, has moved on to bigger and better things. After successfully training a new batch of 200,000 neurons to navigate the demon-infested halls of DOOM, they’ve now wired their “DishBrain” into a Large Language Model (LLM).

That’s right. Real, living human brain cells, firing electrical impulses on a silicon chip, are now choosing the words an AI speaks. This isn’t just another incremental step in machine learning; it’s a bizarre, fascinating, and slightly unsettling leap into the world of “wetware” and biological computing. And frankly, it makes your average chatbot look about as advanced as a pocket calculator.

From Pixelated Paddles to Hellish Landscapes

To understand how we got to a point where brain cells are co-authoring text, we have to look back at Cortical Labs’ greatest hits. In 2022, the Melbourne-based team made headlines with their “DishBrain” experiment. They grew neurons on a microelectrode array that could both stimulate the cells and read their activity. Electrical signals encoded the position of the ball in Pong, and the neurons learned to fire in a way that moved the paddle, showing goal-directed learning within about five minutes of gameplay. The trick was the feedback: a successful hit was followed by predictable stimulation, a miss by a burst of random noise, and the cells reorganized their activity to keep their inputs predictable. It was a stunning proof-of-concept for synthetic biological intelligence.
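Cortical Labs hasn’t released the DishBrain control code, but the published loop is simple enough to caricature in a few lines of Python. In the toy sketch below, a random-number generator stands in for the culture; names like FakeCulture and encode_ball are invented for illustration, and only the feedback scheme (predictable stimulation for a hit, noise for a miss) comes from the actual study.

```python
import random

# A toy simulation of the DishBrain closed loop. This is NOT Cortical Labs'
# code: FakeCulture is a random-number stand-in for real neurons, and every
# name here is invented. Only the feedback scheme (predictable stimulation
# for a hit, random noise for a miss) is taken from the published study.

class FakeCulture:
    """Stand-in for the dish: stimulation patterns in, spike counts out."""

    def stimulate(self, row):
        self.last_row = row  # 'sensory' electrodes encoding the ball's height

    def read_motor_activity(self):
        # Spike counts from two 'motor' electrode groups (pure noise here).
        return random.gauss(5, 1), random.gauss(5, 1)

    def feedback(self, hit):
        # Real protocol: a hit earns predictable stimulation, a miss earns
        # unpredictable noise; the culture adapts to keep its input predictable.
        pass  # a living dish would adapt here; our stand-in cannot

def encode_ball(ball_y, n_rows=8):
    """Map the ball's vertical position onto one row of sensory electrodes."""
    return int(ball_y * (n_rows - 1))

culture, paddle_y = FakeCulture(), 0.5
for step in range(100):
    ball_y = random.random()                    # toy game state
    culture.stimulate(encode_ball(ball_y))      # tell the dish where the ball is
    up, down = culture.read_motor_activity()
    paddle_y += 0.05 if up > down else -0.05    # decode spikes into paddle motion
    paddle_y = min(1.0, max(0.0, paddle_y))
    culture.feedback(hit=abs(paddle_y - ball_y) < 0.1)
```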

But Pong is child’s play. The tech world has a long-standing mantra for judging new hardware: “Can it run DOOM?” So, naturally, that’s what Cortical Labs did next. The jump from the simple 2D world of Pong to the 3D environment of DOOM is immense, requiring spatial navigation, threat detection, and decision-making. Yet, the neurons learned. The game’s video feed was translated into patterns of electrical stimulation, and the neurons’ responses were decoded into in-game actions like moving and shooting. While the performance was more like a fumbling beginner than a seasoned pro, it proved the system could handle far more complex, dynamic tasks.
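The actual DOOM pipeline is unpublished, so what follows is a guess at the plumbing: downsample each frame onto a stimulation grid, then read a handful of “motor” electrode groups and take the most active one as the in-game action. Every name, shape, and threshold below is made up for illustration.

```python
import numpy as np

# Hedged sketch of a DOOM-style interface. The article doesn't publish the
# real pipeline, so the 8x8 grid, the four action groups, and the Poisson
# stand-in for spike counts are all illustrative guesses.

ACTIONS = ["forward", "turn_left", "turn_right", "shoot"]

def frame_to_stimulation(frame, grid=(8, 8)):
    """Downsample a grayscale game frame onto an 8x8 stimulation grid.

    Bright pixels (e.g. a nearby demon) become stronger stimulation."""
    h, w = frame.shape
    gh, gw = grid
    blocks = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3)) / 255.0  # stimulation intensity per electrode

def decode_action(spike_counts):
    """Pick the in-game action from four 'motor' electrode groups."""
    return ACTIONS[int(np.argmax(spike_counts))]

# Usage with fake data in place of the real array:
frame = np.random.randint(0, 256, size=(120, 160))
pattern = frame_to_stimulation(frame)               # would be sent to the electrodes
spikes = np.random.poisson(4.0, size=len(ACTIONS))  # would be read back from the dish
print(decode_action(spikes))
```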

Giving an LLM a Biological Ghost in the Machine

Having conquered classic video games, the next logical step was apparently to give the neurons a voice. The latest experiment, showcased by figures like tech evangelist Robert Scoble, has the brain cells interfaced with an LLM. Instead of moving a paddle or a space marine, the electrical impulses fired by the neurons are now used to select each token (a word, or a fragment of one) that the AI generates.
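How exactly do spikes pick a token? The demo doesn’t say, so here is one plausible scheme, sketched in Python: the LLM proposes its top-k next tokens, and k electrode channels “vote” between them. Everything below, from neural_token_choice down to the Poisson stand-in for spike counts, is an assumption, not Cortical Labs’ method.

```python
import numpy as np

# How neural activity *might* pick an LLM's next token. Cortical Labs hasn't
# published the mechanism; this is one plausible scheme in which the model
# proposes top-k candidates and the dish's firing pattern chooses among them.

def neural_token_choice(logits, vocab, read_channel_activity, k=4):
    """Let k electrode channels vote between the model's top-k tokens."""
    top_k = np.argsort(logits)[-k:]                # model's k best candidates
    activity = read_channel_activity(k)            # spike counts, one per channel
    return vocab[top_k[int(np.argmax(activity))]]  # most active channel wins

# Toy usage: random numbers stand in for both the model and the neurons.
vocab = ["the", "cat", "sat", "on", "a", "mat", "dish", "brain"]
logits = np.random.randn(len(vocab))
fake_neurons = lambda k: np.random.poisson(3.0, size=k)
print(neural_token_choice(logits, vocab, fake_neurons))
```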

A sneak peek video shows the process in action: a grid displays the channels being stimulated and the corresponding feedback from the neurons as they collectively “decide” on the next piece of text. It’s a raw, unfiltered look at biological matter performing a cognitive task that, until now, has been the exclusive domain of complex algorithms running on power-hungry silicon.

“We have shown we can interact with living biological neurons in such a way that compels them to modify their activity, leading to something that resembles intelligence,” stated Dr. Brett Kagan, Chief Scientific Officer of Cortical Labs, regarding their earlier work.

This new development takes that interaction to a whole new level. It’s one thing to react to a bouncing ball; it’s another entirely to participate in the construction of language.

Why Bother With Brains?

At this point, you might be asking: why go through the trouble of keeping 200,000 neurons alive in a dish when a high-end GPU can run an LLM just fine? The answer lies in efficiency and the fundamental limits of silicon. The human brain performs staggering computations on about 20 watts of power, the equivalent of a dim lightbulb. For comparison, an exascale supercomputer like Frontier draws on the order of 20 megawatts, roughly a million times more, and even machines of that class cannot simulate brain-scale neural activity in real time.

Cortical Labs and others in the field are betting that this incredible energy efficiency can be harnessed. Biological systems excel at parallel processing and adaptive learning in ways that traditional computers, which are deterministic and binary, struggle to replicate. By merging living neurons with silicon, they are creating a hybrid computing architecture that could one day power systems that learn faster and consume a fraction of the energy.

This isn’t just about building a better chatbot. The team at Cortical Labs, led by CEO Dr. Hon Weng Chong, sees a future where this technology revolutionizes robotics, personalized medicine, and drug discovery. Imagine a robot that doesn’t just execute pre-programmed commands but learns and adapts to a new environment with the fluid intelligence of a biological system. Or consider using a patient’s own neurons on a chip to test the effectiveness of different drugs for neurological conditions like epilepsy.

The road ahead is long. Biological systems are complex and can be unpredictable, a far cry from the reliable consistency of silicon. But as Cortical Labs has shown, a cluster of cells in a dish has already gone from playing video games to speaking. The prospect of these same neurons one day controlling a robot is no longer just science fiction—it’s the next item on the roadmap. And that is a thought that is simultaneously terrifying and exhilarating.