Before Conway: The Parallel Inventors of Cellular Automata
On the night of March 3, 1953, a mathematical biologist named Nils Aall Barricelli sat alone in a brick building at the end of Olden Lane in Princeton, New Jersey. It was 10:38 p.m. The daytime occupants of the building — meteorologists running weather models, weapons physicists running bomb calculations — had gone home. Barricelli had the machine to himself: the IAS computer, one of the most powerful computing engines in the world, built under the direction of John von Neumann.
Barricelli shuffled a deck of playing cards and used the draws to generate random numbers. Then he loaded those numbers into the machine as the starting configuration for a digital universe he had designed. He called his first logbook entry “Symbiogenesis problem.” What he was actually doing, though no one used this language yet, was running the world’s first artificial evolution simulation — a program in which digital organisms mutated, competed, and evolved on a computational grid.
No one paid much attention. Barricelli’s paper was published in 1954 in Methodos, a small Italian journal of methodology and philosophy of science, and went largely unread for decades. The world was not yet ready for what he had found.
This is the nature of the history before Conway: a set of independent discoveries that arrived too early, or in the wrong language, or in the wrong country, or at the wrong institution — and that only in retrospect reveal themselves as pieces of the same idea.
The Climate of the 1940s: When Computation Was New
Computation, as a concept, was brand new. Alan Turing had formalized the universal computing machine in 1936. The first electronic computers — Colossus, ENIAC, the IAS machine — were built during and just after the Second World War. The realization that you could simulate physical processes on a machine was electrifying and genuinely unprecedented.
In this context, extraordinarily gifted people began asking a question that had not previously been askable: what can a computation do? Not what arithmetic it can perform, but what it can do in the broadest sense. Can a computation self-reproduce? Can it evolve? Can it model reality itself?
One parallel thread: McCulloch and Pitts had published their 1943 paper modeling neurons as binary threshold units — each neuron either firing or not firing based on its inputs, much as a cellular automaton cell responds to its neighbors. The structural parallel is not a coincidence; both are formalizations of the same intuition about local state and binary computation. Norbert Wiener and the cyberneticists at the Macy Conferences (1946–1953) were working the same vein.
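The parallel is easy to make concrete in code. Here is a minimal sketch in Python; all names are illustrative, and the elementary rule shown (rule 110, named decades later) serves only as a convenient one-dimensional example, not anything McCulloch and Pitts wrote down. Both objects are just functions from a handful of binary inputs to one binary output.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum of binary
    inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def ca_cell(left, center, right, rule_table):
    """Cellular automaton cell: the next state is a pure lookup on the
    states of the local neighborhood."""
    return rule_table[(left, center, right)]

# A two-input AND gate as a threshold unit:
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# Elementary rule 110 encoded as a neighborhood lookup table:
rule_110 = {(l, c, r): (110 >> (4 * l + 2 * c + r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}
assert ca_cell(1, 1, 0, rule_110) == 1
```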
Into this environment came von Neumann — and Ulam, and Zuse, and Barricelli — each arriving at similar ideas from different directions.
Stanisław Ulam and the Lattice Suggestion
Stanisław Ulam was born in Lwów in 1909, emigrated to the United States in 1939, and joined the Manhattan Project at Los Alamos. In the 1940s he became interested in crystal growth — specifically, how complex structures could arise from simple local rules governing how molecules attach to a growing lattice. He modeled this with a grid: each point influenced by its immediate neighbors, the whole system governed by rules applied simultaneously everywhere.
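A rule of this kind is short enough to state in code. The sketch below is illustrative rather than a reconstruction of Ulam's actual models: the specific rule, grow a new cell wherever exactly one orthogonal neighbor is occupied, follows the lattice-growth experiments he later published with Robert Schrandt at Los Alamos.

```python
def grow(occupied):
    """One synchronous step of lattice growth: every empty cell with
    exactly one occupied orthogonal neighbor becomes occupied. All
    cells update simultaneously, from the same snapshot."""
    neighbor_counts = {}
    for (x, y) in occupied:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in occupied:
                neighbor_counts[nb] = neighbor_counts.get(nb, 0) + 1
    births = {cell for cell, n in neighbor_counts.items() if n == 1}
    return occupied | births

crystal = {(0, 0)}            # a single seed "molecule"
for _ in range(6):
    crystal = grow(crystal)
print(len(crystal), "cells")  # a branching, coral-like figure
```

Even this toy rule produces intricate branching from a single seed, which is precisely the phenomenon that interested Ulam: global structure emerging from purely local attachment rules.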
This is, structurally, a cellular automaton. But Ulam was studying crystals, not building a general theory. The generalization came through von Neumann.
John von Neumann, at the Institute for Advanced Study in Princeton, had been wrestling since at least 1948 with a fundamental problem: could a machine be designed that would construct a copy of itself? His initial approach was mechanical — a robot floating in a sea of spare parts, reading instructions from a tape and assembling a duplicate. The design was conceptually unwieldy: the logic of self-replication kept getting buried under mechanical detail.
Ulam pointed a way out. Replace the physical sea of parts with an abstract grid of cells, he suggested. Each cell holds a state; the system evolves by local rules. Self-replication becomes a purely formal problem.
Von Neumann adopted this framework. By approximately 1952, he had constructed a two-dimensional cellular automaton with 29 possible states per cell and proven that within this system a self-replicating configuration existed. The construction was enormous — roughly 200,000 cells — but it was there.
Von Neumann died of cancer in 1957 before completing the write-up. His notes were edited by Arthur W. Burks and published posthumously as Theory of Self-Reproducing Automata in 1966. That publication put cellular automata on the map for the next generation — including a young British mathematician named John Conway.
Konrad Zuse: The Universe as a Computation
Meanwhile, in wartime Germany, a different mind was following a different thread to a strikingly similar destination.
Konrad Zuse was a civil engineer who had become obsessed with computation. Working largely in isolation — and against the indifference of the Nazi state, which saw no military value in his work — he built the Z1 in his parents’ Berlin living room in 1938, followed by the Z3 on May 12, 1941: the world’s first freely programmable computer, operating in binary with floating-point arithmetic. Demonstrated in Berlin, noted by a handful of engineers, promptly overshadowed by the war.
Zuse knew nothing of what was happening at Princeton or Los Alamos. Working alone, he arrived at a more radical conclusion than Ulam or von Neumann ever publicly articulated.
If a computer could simulate a physical system, Zuse reasoned, then perhaps the physical universe was a computational system — not merely analogous to one, but literally operating as one. Discrete cells of space, each evolving according to local rules, the apparent continuity of physics an emergent approximation of an underlying digital process.
These ideas crystallized into a book published in 1969: Rechnender Raum — translated by MIT as Calculating Space. It is the founding document of digital physics. Zuse argued that cellular automata, not differential equations, were the appropriate language for fundamental physics. Space itself consisted of discrete computational cells. The laws of physics were the update rules.
Calculating Space appeared the same year Conway began seriously experimenting with automaton rules — two people working in the same formal universe, arrived at by completely independent routes.
Barricelli’s Secret Organisms
Back at Princeton, Barricelli’s 1953 experiments had proceeded in a different direction entirely.
Where von Neumann was interested in self-replication as a logical problem — could it be proven to exist? — Barricelli was interested in evolution as a physical process. His digital organisms were not designed to reproduce in a formally proven way. They were numerical entities that moved through a one-dimensional space according to rules that allowed them to interact, to be “parasitized” by other patterns, to mutate, and to reproduce imperfectly. Barricelli was explicitly trying to model Darwinian evolution in a digital medium.
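The flavor of his reproduction norm can be sketched in a few lines. What follows is a loose simplification in Python: the shift-by-own-value reproduction follows Barricelli's published description, but the collision rule here is a stand-in, not his actual mutation norms.

```python
import random

SIZE = 512  # a circular one-dimensional universe of integers (0 = empty)

def generation(world, rng):
    """One generation: each nonzero "gene" n at position i tries to
    place a copy of itself n cells away; contested cells die or mutate."""
    new = [0] * SIZE
    for i, n in enumerate(world):
        if n == 0:
            continue
        j = (i + n) % SIZE
        if new[j] == 0:
            new[j] = n                            # uncontested reproduction
        elif new[j] != n:
            new[j] = rng.choice((0, new[j] + n))  # collision: death or mutation
    return new

rng = random.Random(1953)  # Barricelli drew his random numbers from shuffled cards
genes = [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]
world = [rng.choice(genes) if rng.random() < 0.25 else 0 for _ in range(SIZE)]
for _ in range(100):
    world = generation(world, rng)
print(sum(1 for n in world if n), "occupied cells after 100 generations")
```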
He succeeded, in a limited sense. His organisms evolved, diversified, and developed something that looked, to Barricelli’s eyes, like symbiosis. He published his results in Methodos in 1954, and again, in more developed form, in 1957. The field of biology did not notice.
The invisibility of Barricelli’s work is one of the stranger facts in the history of computing. He was doing, in 1953, what the artificial life community would rediscover in the 1980s and celebrate as a breakthrough. The gap exists partly because evolutionary biology had no framework for thinking about digital organisms, and partly because the computer — von Neumann’s machine at the IAS — was primarily a tool of weather forecasting and weapons calculation, not biology. Barricelli was working at night, literally and figuratively, on machine time borrowed from the weather and weapons runs.
The Convergence
What is striking, in retrospect, is how many people arrived at the cellular automaton framework without knowing about each other. Ulam from crystal growth. Von Neumann from self-replication. Zuse from the nature of space. Barricelli from evolution. McCulloch and Pitts from neuroscience. All of them, between roughly 1943 and 1953, circling the same formalism: a grid of cells, discrete states, local rules, emergent global behavior.
When an idea is invented simultaneously by people who do not know each other, the usual explanation is that its time has come — that something in the intellectual environment has made it almost inevitable. For cellular automata, that something was the formalization of computation itself. Once Turing and von Neumann established what a computation was, and once physical computers existed to run them, the question “what can local rules do to a global system?” was bound to be asked.
Conway, working in Cambridge in 1968 and 1969, knew about von Neumann’s work — Theory of Self-Reproducing Automata had been published in 1966 and was circulating widely. By his own account he was directly following the research program Ulam and von Neumann had established. What Conway added was not the framework but the design sensibility: a deliberate search for the simplest rules that would produce the richest behavior. Von Neumann’s 29-state automaton was a technical proof of principle. Conway’s 2-state automaton was a playground — a universe that fit on a coffee table.
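The contrast is easy to make concrete. Conway's complete rule (two states, the eight-cell Moore neighborhood, birth on exactly three live neighbors, survival on two or three) fits in a few lines; the sketch below uses a set of live cells rather than a bounded grid, a common implementation convenience.

```python
from collections import Counter

def life_step(alive):
    """One Game of Life generation; `alive` is a set of (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same glider, shifted one cell diagonally
```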
That shift, from formal proof to playful exploration, from 29 states to 2, is what made Life the thing that spread across the world. But the world it spread to had been prepared by decades of quiet, parallel, unrecognized work: a Norwegian biologist shuffling cards late at night in Princeton, a German engineer building computers in his living room during a war, a Polish mathematician in New Mexico thinking about how snowflakes grow.
Conway found the right combination. But the idea had been waiting in multiple minds, in multiple countries, for twenty years.