When Biology Becomes the Computer
Cortical Labs just grew 200,000 human neurons on a chip and trained them to play Doom. The implications run far deeper than the headline.
There are technological moments that feel incremental — a faster chip, a cheaper sensor, a slightly better algorithm. And then there are moments that quietly reorder the intellectual landscape, the kind you only recognise in retrospect as the hinge on which everything turned. Cortical Labs has just handed us the latter. I suspect most of us are still catching up to what it actually means.
Their latest milestone: 200,000 lab-grown human neurons, cultivated on a microchip, trained to play Doom. Not a simulation of neurons. Not a neural-network metaphor running on silicon. Actual, living human brain cells — interfaced with a chip, receiving sensory input, generating motor output, and adapting in real time to a hostile virtual environment. The neurons learned. They responded to feedback. They got better.
We have crossed a line — not a bright line, but a line nonetheless — between treating the brain as inspiration for computing and treating it as the substrate.
I want to be precise about why this matters. Not as a headline. As an inflection point with compounding consequences for computing, medicine, cognition science, and the long arc of what it means to be human.
The Architecture of Biological Intelligence
To appreciate what Cortical Labs has achieved, you need to understand what they are working with. A human neuron is not a transistor. It is a living, metabolically active cell that fires electrochemical signals along branching axons, forms synaptic connections dynamically, prunes weak pathways, and strengthens effective ones. And it does all this with extraordinary efficiency: the entire human brain runs on roughly 20 watts of glucose-derived power.
Silicon chips, by contrast, are fixed-topology deterministic circuits. Once fabricated, their architecture is frozen. They scale by adding more transistors — a path that is running hard into the physical limits of atomic-scale lithography and the thermal catastrophe of multi-megawatt data centres.
Cortical Labs’ DishBrain platform, and now their commercial CL1 device, inserts living neurons into this equation as the adaptive processing layer. Electrodes translate the game state into patterned stimulation delivered to the neurons; the collective firing of the network is read from other electrodes and interpreted as control signals. Reward and punishment are delivered not through backpropagation but through predictive-coding principles: when the system performs well, the neurons receive ordered, predictable stimulation; when it performs poorly, they receive unpredictable noise.
They learn because neurons, by their very nature, seek to predict and control their environment.
This is not a trick. It is an emergent property of biological neural networks that silicon has never been able to replicate — not because engineers haven’t tried, but because the property emerges from the wetware itself: its chemistry, its plasticity, its sensitivity to thermodynamic surprise.
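The closed-loop scheme described above (read the network's firing, decode an action, then feed back predictable stimulation for success and unpredictable noise for failure) can be sketched in a few lines of Python. This is a minimal illustrative sketch only: the function names, channel counts, and stimulation values are my own assumptions, not Cortical Labs' actual API or parameters.

```python
import random

# Hypothetical sketch of the closed-loop training scheme described above.
# The electrode read/write interface is a stand-in, not a real API.

def decode_action(spike_counts):
    """Interpret collective firing as a control signal: the most active
    electrode region 'votes' for the action taken in the game."""
    return max(range(len(spike_counts)), key=lambda i: spike_counts[i])

def feedback_stimulus(performed_well, n_channels=8):
    """Predictive-coding-style feedback: ordered, predictable stimulation
    after good performance; disordered noise after poor performance."""
    if performed_well:
        # A regular, fully predictable pattern: every channel at a fixed rate.
        return [(ch, 100.0) for ch in range(n_channels)]  # (channel, Hz)
    # Unpredictable stimulation: random channels at random rates.
    return [(random.randrange(n_channels), random.uniform(1.0, 500.0))
            for _ in range(n_channels)]

# One closed-loop cycle: read firing -> decode action -> evaluate -> stimulate.
spike_counts = [3, 12, 5, 7]                 # placeholder electrode readout
action = decode_action(spike_counts)         # index of the most active region
stimulus = feedback_stimulus(performed_well=(action == 1))
```

The design choice worth noticing is the reward signal itself: nothing is "scored" numerically. The network is simply given a more predictable world when it succeeds, which is exactly what a system that minimises surprise will learn to seek out.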
Where This Hits Closest to Home
As a medical futurist, I am perhaps most animated by the healthcare implications — and they are substantial, operating across at least three distinct registers.
DISEASE MODELLING AT HUMAN FIDELITY
Neural organoids and cultured neuron systems are already being used to model Alzheimer’s, epilepsy, Parkinson’s, and treatment-resistant depression. The Cortical Labs advance accelerates this dramatically. When neurons are not merely cultured but trained — when they demonstrate adaptive behaviour, learning, and response to environmental feedback — they become far more faithful models of living neural tissue than static cultures.
Drug testing on human-fidelity neural models reduces the devastating gap between in-vitro results and clinical outcomes — the gap where promising drugs that work in animal models fail in human trials at enormous cost. Bio-computing platforms could compress the drug discovery timeline significantly while reducing dependence on animal testing.
NEURAL PROSTHETICS AND BCI CONVERGENCE
The Cortical Labs platform is, at its core, a brain-computer interface: neurons communicating with silicon and silicon communicating back. This is precisely the architecture underlying next-generation neural prosthetics.
Today’s prosthetics decode motor intentions from cortical signals with limited resolution. A biological computing substrate that can be trained — that adapts to the user’s neural patterns rather than requiring the user to adapt to fixed decoding algorithms — represents a step change in prosthetic capability. For patients with spinal cord injuries, ALS, locked-in syndrome, or severe stroke, this is not an abstraction. It is the difference between a prosthetic hand that moves in fixed patterns and one that learns to move like their hand.
PERSONALISED NEUROLOGY
Perhaps most ambitiously, bio-computing platforms could enable personalised neurological simulation. If a patient’s neurons can be cultured from induced pluripotent stem cells — derived from a simple blood draw — and if those neurons can be trained on that patient’s specific disease context, we could test interventions on a living neural model of that specific patient before committing to treatment.
This is personalised medicine at a resolution current genomics cannot achieve. Genomics tells us about predisposition. A living neural model tells us about response.
The Questions We Cannot Defer
I want to close not with triumphalism but with the harder questions — the ones the medical and technology communities have an obligation to sit with seriously.
Moral status. At what point does a trained, adaptive, experience-responsive neural network acquire interests that deserve moral consideration? This is not a science fiction question. It is a bioethics question with a live edge, and the philosophical literature is already divided. We need institutional frameworks before the technology outpaces our capacity to govern it.
Access and equity. Biological computing, if it delivers on its promise, will initially be concentrated in well-capitalised institutions. The wetware-as-a-service model mitigates this somewhat — but the history of transformative technologies suggests that the gap between early adopters and late adopters tracks existing inequalities with painful fidelity. The global health implications are enormous, but they will only be realised equitably if access is designed in from the beginning.
What this does to our understanding of ourselves. If computation can be performed by living human neurons — if intelligence is not exclusively the province of biological brains or silicon chips but is a property that can be instantiated in hybrid wetware systems — then the categories we use to understand cognition, identity, and personhood are genuinely destabilised.
I do not think this is a reason to slow down. I think it is a reason to think harder — and to insist that the people building this technology are in active dialogue with ethicists, clinicians, regulators, and patients. Not as an afterthought, but as architects.
Cortical Labs has built something remarkable. What we do with it — the frameworks we develop, the questions we insist on answering, the equity we engineer into its distribution — will determine whether this inflection point is one we look back on with pride or with regret.
The silicon era of computing is not over. But its monopoly on intelligence — artificial or otherwise — just ended.