A History of Cellular Automata: 1940–Present
The first conversations happened in the desert, during a war, between two of the most formidable mathematical minds of the twentieth century. Stanislaw Ulam and John von Neumann were colleagues at Los Alamos. They talked about crystals, about machines that could build other machines, about whether the rules of life — of growth, of reproduction, of complexity arising from simplicity — could be captured in a formal mathematical system.
That was the early 1940s. In 2020, some eighty years later, a video circulated online showing a pattern running inside Conway’s Game of Life: a working implementation of Life itself, the whole game running as a structure inside its own rules, taking 34 million generations to display. The video was twelve seconds long.
Between those two moments lies one of the strangest intellectual journeys in mathematics: a field born from wartime speculation, developed in near-total obscurity for twenty years, ignited by a single magazine column, and transformed by the internet into one of the most active recreational mathematics communities in the world. What follows is that journey, dated.
The 1940s: The Question
1940–1945: At Los Alamos, where the Manhattan Project had assembled an unprecedented concentration of scientific talent, John von Neumann and Stanislaw Ulam held the conversations that would eventually produce cellular automata. Ulam was studying crystal growth using discrete mathematical models — how complex structures could arise from simple, repeated local rules applied to a lattice. Von Neumann was wrestling with the question of self-replication: could a machine build a copy of itself, and if so, what was the minimum logical complexity required? Both were circling the same underlying insight without yet having a common language for it.
1948 (September 20): Von Neumann delivered “The General and Logical Theory of Automata” at the Hixon Symposium in Pasadena — the first time he publicly compared computing machines with living organisms and addressed self-reproduction as a formal mathematical problem. The lecture marks the intellectual birth of automata theory as a discipline. Around the same time, Ulam made the suggestion that changed the direction of von Neumann’s work: abandon the physical robot model and use a discrete, grid-based system instead. Self-replication, Ulam argued, could be studied as a property of abstract logical systems. The “cellular” part of the name is Ulam’s; the “automata” part is von Neumann’s.
The 1950s: Parallel Beginnings
1952–1953: At the Institute for Advanced Study, Norwegian-Italian mathematician Nils Aall Barricelli conducted some of the first computational experiments in artificial evolution. Working on von Neumann’s IAS machine, he populated a numerical universe with random values drawn from a shuffled deck of playing cards and observed phenomena he called parasitism, symbiogenesis, and speciation arising spontaneously from mutation and recombination rules. His results, published in Norwegian in 1954 and in English in 1957, were almost entirely ignored. The field that would vindicate him — artificial life — would not formally exist for another thirty years.
1952–1954: Von Neumann worked intensively on the design of a self-reproducing cellular automaton — a formal system in which a machine embedded in a grid could read a description of itself, construct a copy, and pass the description forward. The design required 29 possible cell states with a four-cell neighborhood (up, down, left, right). By around 1953 he had satisfied himself that the theoretical problem was solved and moved on, leaving the manuscript unfinished. The paradox is characteristic: once he could see that a thing was possible, he lost interest in completing the demonstration.
1957 (February 8): Von Neumann died of bone cancer in Washington, D.C., at the age of fifty-three. His cellular automaton manuscript existed as an incomplete set of notes and diagrams. The field he had co-founded would wait another nine years for its foundational text to be published.
The 1960s: Formalization and Isolation
1959–1965: Konrad Zuse, who had built one of the first programmable computers in the 1940s, developed his idea that the universe might itself be a computational process running on a discrete grid. He called the hypothesis Rechnender Raum — calculating space. Where von Neumann had used the cellular automaton to model self-replication, Zuse proposed it as a model for physics itself: continuous physical laws, he argued, were approximations of underlying discrete rules. This was a radical claim and, at the time, one that almost no one was prepared to pursue.
1966: Arthur Burks published Theory of Self-Reproducing Automata — completing and annotating von Neumann’s unfinished manuscript from the 1950s. The book gave the field its foundational text a full decade after von Neumann’s death, and immediately triggered a new wave of work: Edgar Codd began designing a simpler self-reproducing CA, and the theoretical vocabulary of the field — states, transitions, neighborhoods, universality — started to stabilize.
1968: Edgar F. Codd published a cellular automaton achieving what von Neumann’s 29-state system achieved — self-reproduction and universal computation — using only 8 states. This demonstrated that von Neumann’s complexity was not fundamental to the result. Codd did not provide a complete working implementation (it was so large that a full simulation would wait until 2009), but the theoretical point was established.
1969: Zuse published Rechnender Raum (translated by MIT the following year as Calculating Space) — the first work of what would be called digital physics. Whether or not the claim that computation underlies all physical law is correct, it inaugurated a line of inquiry running through Fredkin’s cellular automaton physics, Wolfram’s A New Kind of Science, and into contemporary work on the computational universe.
1970: The Year Everything Changed
October 1970: Martin Gardner published “The Fantastic Combinations of John Conway’s New Solitaire Game ‘Life’” in his Mathematical Games column in Scientific American. The column described a two-state, two-dimensional cellular automaton devised by John Horton Conway at Cambridge: four rules, infinite grid, unlimited consequence. Conway offered a $50 prize to the first person who could prove or disprove his conjecture that no finite pattern could grow indefinitely. Gardner’s column — read by hundreds of thousands of people, many of whom had access to early computers — detonated. Read more about what happened that month →
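Part of the column’s force was how little machinery the game needs. The rules fit in a few lines of code — here as an illustrative sketch (the set-of-live-cells representation is a programmer’s convenience, not Conway’s notation): a live cell survives with two or three live neighbours, and a dead cell comes alive with exactly three.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: four generations shift it one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for x, y in glider}
```

That a rule this small supports gliders, guns, and ultimately universal computation is the whole story of the decades that followed.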
November 1970: Bill Gosper, a mathematician and hacker at MIT, led a team that discovered the Gosper Glider Gun — a pattern that returns to its starting configuration every 30 generations, emitting a new glider each cycle and so growing without bound. It disproved Conway’s conjecture that no finite pattern could grow indefinitely, it claimed the $50 prize, and it established that Life contained structures of a kind no one had anticipated: machines that manufactured other machines, indefinitely. The glider gun became the most important single pattern discovery in the game’s history, not because of what it was but because of what it made possible.
The 1970s: The Community Forms
1971: Gardner published a follow-up column reporting on the Gosper gun and the emerging zoo of named patterns, introducing another wave of readers to Life. That same year, economist Thomas Schelling published his segregation model — showing that individuals with mild preferences for same-group neighbors reliably produce highly segregated communities — working with actual coins on graph paper. Schelling’s model operates on the same principles as Life: local rules, no central control, emergent global order. The work contributed to the Nobel Memorial Prize in Economic Sciences he received in 2005, and it remains the clearest social-science proof of what Life demonstrates mathematically.
The 1980s: Proof and Classification
1982: Elwyn Berlekamp, John Conway, and Richard Guy established the Turing completeness of the Game of Life in Winning Ways for Your Mathematical Plays — proving that within a large enough Life grid, you can embed a working universal computer built entirely from gliders and other patterns. Life was not merely an interesting puzzle but a formal computational substrate as powerful as any machine ever conceived.
1983–1984: Stephen Wolfram, working at the Institute for Advanced Study, published a systematic classification of one-dimensional cellular automata into four classes. His paper “Universality and Complexity in Cellular Automata,” published in Physica D in 1984, described Class I systems (all cells converge to a single state), Class II systems (periodic stable structures emerge), Class III systems (chaotic, apparently random patterns), and Class IV systems (complex, long-lived local structures — the class to which Life belongs). Wolfram’s classification gave the field a shared theoretical vocabulary and established that the complexity of a CA’s behavior did not scale simply with the complexity of its rules. Read more about Wolfram’s program →
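The systems Wolfram classified are easy to state concretely: in an elementary CA, each cell’s next state depends only on itself and its two immediate neighbours, so a rule is an 8-entry lookup table and there are exactly 256 possible rules. A sketch, using Rule 30 — a standard Class III example (the wraparound boundary is a convenience, not part of the definition):

```python
def eca_step(cells, rule=30):
    """One update of a 1-D binary CA; the 3-bit neighbourhood
    (left, centre, right) indexes into the rule number's bits."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15
row[7] = 1                               # single live cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = eca_step(row)
```

From one live cell, Rule 30 produces the irregular, apparently random triangle that made it Wolfram’s flagship example of complexity from a minimal rule.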
1984: Christopher Langton published a self-reproducing loop in an 8-state cellular automaton — only 86 cells — that replicated itself by extending an arm, completing a loop, and detaching the offspring. Unlike von Neumann’s design, Langton’s loops required no universality: they reproduced without doing arbitrary computation. The simplification was the point. Self-reproduction didn’t require the full logical complexity of a universal computer — it was, in some sense, a simpler and more fundamental phenomenon.
1986: Craig Reynolds developed Boids — a simulation in which each agent follows three local rules (avoid crowding, align heading with neighbors, steer toward the group’s center of mass) and produces realistic-looking flocks. Reynolds’ SIGGRAPH 1987 paper was not formally cellular automata, but it demonstrated the same principle in a new substrate: local rules producing global patterns that appear designed but are not. Boids became one of the most cited examples of emergence in computer science.
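Reynolds’ three rules can be caricatured in a few lines. A sketch under simplifying assumptions (every boid sees every other, unit mass, hand-picked weights; Reynolds’ model uses local neighbourhoods and limited steering forces):

```python
def boids_step(pos, vel, dt=0.1):
    """One step for boids given as lists of (x, y) positions and velocities."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        px, py = pos[i]
        vx, vy = vel[i]
        # Cohesion target: centre of mass of the others; alignment target: their mean velocity.
        cx = sum(p[0] for j, p in enumerate(pos) if j != i) / (n - 1)
        cy = sum(p[1] for j, p in enumerate(pos) if j != i) / (n - 1)
        ax = sum(v[0] for j, v in enumerate(vel) if j != i) / (n - 1)
        ay = sum(v[1] for j, v in enumerate(vel) if j != i) / (n - 1)
        # Separation: push away from any boid closer than one unit.
        sx = sy = 0.0
        for j, (qx, qy) in enumerate(pos):
            if j != i and (qx - px) ** 2 + (qy - py) ** 2 < 1.0:
                sx += px - qx
                sy += py - qy
        vx += 0.01 * (cx - px) + 0.05 * (ax - vx) + 0.1 * sx
        vy += 0.01 * (cy - py) + 0.05 * (ay - vy) + 0.1 * sy
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

Even this crude version shows the signature behaviour: a straggler is drawn toward the group while close neighbours spread apart, with no boid holding any global plan.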
1987: Christopher Langton organized the first Workshop on the Synthesis and Simulation of Living Systems at Los Alamos National Laboratory — the founding event of the artificial life field. Cellular automata had given researchers a concrete substrate for studying life-like behavior, and the field had accumulated enough results and practitioners to warrant its own conference. The proceedings, published by the Santa Fe Institute, established ALife as a distinct discipline.
The 1990s: Expansion
1990: Langton described Langton’s Ant — an ant on a grid following two rules: turn right on white cells, turn left on black, flip the color, advance. For the first few thousand steps, the ant traces apparently random patterns. Then, around step 10,000, it begins constructing a diagonal highway — a 104-step repeating pattern that extends forever. It has never been proven that the ant must eventually build the highway from an arbitrary finite starting configuration, though it has done so in every case tested. The ant became a canonical example of CA systems harboring surprising order at timescales inaccessible to intuition.
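The two rules in full, as a direct sketch (the coordinate conventions are a choice; here y grows downward and the ant starts on an all-white grid, facing up):

```python
def langton(steps):
    """Run Langton's Ant for `steps` steps; return the set of black cells."""
    black = set()
    x = y = 0
    dx, dy = 0, -1                      # facing "up"
    for _ in range(steps):
        if (x, y) in black:             # black cell: turn left, flip to white
            dx, dy = dy, -dx
            black.discard((x, y))
        else:                           # white cell: turn right, flip to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return black
```

Plotting `langton(11_000)` shows the chaotic blob with the highway already emerging from its edge; nothing in the code hints that this is coming.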
2002: A New Kind of Science
May 14, 2002: Stephen Wolfram published A New Kind of Science — a 1,197-page argument that simple computational rules, including cellular automata, are the fundamental framework for understanding physical, biological, and social phenomena. The popular press was enthusiastic; the scientific community was divided, accepting Wolfram’s computational results but disputing his claim of a paradigm shift and arguing he had underestimated prior work. Whatever the verdict, NKS introduced cellular automata to an enormous new audience and generated a decade of debate about the relationship between computation and physical law. Read more →
2010: Self-Replication Achieved
May 2010: Andrew Wade announced the Gemini — a self-replicating spaceship in Conway’s Game of Life. The pattern consisted of two identical construction units connected by an instruction tape. One unit built a copy of the whole pattern at a distance while the original destroyed itself; the offspring then repeated the process, moving obliquely across the grid. The Gemini required approximately 34 million generations to complete one replication cycle. In doing so, it fulfilled — in Conway’s simple two-state system — the goal that von Neumann had set for himself in the 1940s: a machine that reads its own description and constructs a copy. Von Neumann had needed 29 states and a system of enormous theoretical complexity. Life needed two.
The 2010s–Present: New Directions
2013–present: The LifeWiki grew into a comprehensive encyclopedia of thousands of named patterns and proof sketches. Distributed search projects running on volunteer computers via the Catagolue platform began systematically cataloguing every possible small starting configuration — accumulating records from trillions of individual pattern evolutions. Meanwhile, automated search tools discovered patterns with extraordinary lifespans: configurations that run for billions of generations before stabilizing, exploiting Life’s computational universality to build internal timers and delay mechanisms. The gap between the simplest patterns (a Block stabilizes in one step) and the most complex is not just large — it is, in a formal sense, infinite.
April 11, 2020: John Horton Conway died in New Brunswick, New Jersey, of COVID-19, at eighty-two. He had regarded the Game of Life with characteristic ambivalence — it was not, he said, his most interesting work, and he sometimes resented the way it overshadowed the surreal numbers and his work in group theory. But he understood what it had done, and he was not immune to the pleasure of watching thousands of people spend their lives in a mathematical universe he had built from four rules. Read more about Conway →
2020: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, and Michael Levin published “Growing Neural Cellular Automata” in Distill, showing that CA update rules could be learned by neural networks rather than hand-designed. Their system trained a network to serve as the local rule, then used the trained CA to grow target shapes from single seed cells and regenerate them after damage. Where the original CA researchers asked what rules could produce self-replication, the neural CA researchers asked: given a target behavior, what rules would produce it? The same question, inverted.
What the Timeline Reveals
Reading these eighty years in sequence, a pattern emerges. The field did not develop steadily — it developed in spasms, each triggered by a different catalyst: a personal conversation (Ulam and von Neumann), a posthumous publication (Burks), a magazine column (Gardner), a proof (Berlekamp, Conway, and Guy), a classification system (Wolfram), a book (NKS), a self-replicator (Wade). The catalysts were heterogeneous — mathematical, technological, cultural — but each one pulled the field to a level the previous catalyst had made possible.
The acceleration after 2000 reflects computing power and the internet working in combination. Patterns that took months to find by hand in the 1970s can now be found in milliseconds. The self-replicating structures that would have required decades of manual simulation can be run in minutes. And the internet transformed a scattered, mail-dependent community into a continuously connected global research collective — which is why the Gemini was discovered not in a university lab but by someone working alone with a home computer, in a tradition running in a direct line back to Gosper’s team at MIT in 1970.
The deepest continuity is not about computing power but about the question. Von Neumann asked: can a formal system reproduce itself? Barricelli: can computation model evolution? Conway: can simple rules produce interesting complexity? Wolfram: can computation explain physical law? Mordvintsev: can a network learn to grow? Each question is different, but each is a version of the same question: what is the minimum required for the maximum to be possible?
That question has no final answer. It has only better and better approximations.
Where to Go From Here
- The Origins of the Game of Life →
- John von Neumann and the Dream of Self-Replication →
- Martin Gardner and the Column That Changed Everything →
- October 1970: The Month Life Escaped Into the World →
- Stephen Wolfram and the New Kind of Science →
- Self-Replication: When Life Builds Itself →
- Life and the Machine: CA in Computer Science →