Stephen Wolfram: A New Kind of Science
There is a photograph from the 2002 launch of A New Kind of Science that captures something important about the book’s reception: a thousand-page hardcover in a bookstore display, stacked in a pyramid, with a $44.95 price tag and a blurb that promises nothing less than a new foundation for all of science. The book had been in preparation for ten years. It would sell over a hundred thousand copies. It would receive some of the most enthusiastic and some of the most withering reviews in recent scientific publishing. And it would contain, buried in its appendices, a genuine mathematical result that its author had not proved himself and that he had gone to considerable lengths to control.
Understanding A New Kind of Science requires keeping these two things separate: what Wolfram showed, and what Wolfram claimed. They are not the same.
A Career Built on Precocity
Stephen Wolfram’s biography is almost parodically exceptional. Born in London in 1959, he published his first physics paper at the age of fifteen. He entered Oxford at seventeen. By twenty, he had earned a PhD in theoretical physics from Caltech. In 1981, at twenty-one, he became one of the youngest MacArthur Fellows. In 1988, he released Mathematica, the symbolic computation software that remains a standard tool in mathematics education and research worldwide, and which made him financially independent of the academic system.
That last fact matters. By the late 1980s, Wolfram had the resources to work however he wanted. He chose to spend a decade working largely alone, systematically experimenting with one-dimensional, two-state cellular automata — the “elementary” CA that he himself had begun classifying in the early 1980s but that had never been exhaustively surveyed. He discussed his results with almost no one. He published nothing. He worked, and the book grew.
The Empirical Program: What NKS Actually Contains
A New Kind of Science is, at its empirical core, a systematic experimental survey of all 256 elementary cellular automaton rules. These rules operate on one-dimensional binary grids, where each cell’s next state depends on its current state and the states of its two immediate neighbors. A three-cell neighborhood has 2^3 = 8 possible configurations, and a rule assigns each of them an output of 0 or 1, so there are 2^8 = 256 such rules, each uniquely identified by a number from 0 to 255 (the Wolfram rule numbering system, still standard today).
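A minimal sketch in Python makes the encoding concrete. This is an illustrative implementation with periodic (wrap-around) boundaries, not code from the book:

```python
def ca_step(cells, rule_number):
    """One update of an elementary CA with periodic (wrap-around) boundaries."""
    # Bit k of the rule number is the new state for the neighborhood whose
    # three bits (left, center, right) spell out the integer k.
    table = [(rule_number >> k) & 1 for k in range(8)]
    n = len(cells)
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# Rule 90 computes "left XOR right", growing a Sierpinski-triangle pattern:
row = ca_step([0, 0, 0, 1, 0, 0, 0], 90)   # -> [0, 0, 1, 0, 1, 0, 0]
```

Iterating `ca_step` and stacking the rows reproduces the space-time diagrams that fill the book's early chapters.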
What Wolfram found, and documented in extraordinary detail across hundreds of pages, was that the behavioral complexity of these 256 rules was distributed in a striking and non-obvious way. Most rules were boring: they produced either uniform states or simple periodic patterns. But a few rules produced behavior of startling complexity.
Rule 30, in particular, caught Wolfram’s attention. Starting from a single live cell, Rule 30 generates a pattern that appears, by every statistical test Wolfram could apply, to be random. Not merely passing a few tests — random-seeming in the strong sense that no regularities can be detected in the resulting string of bits. Wolfram used Rule 30 as the default random number generator in Mathematica from 1988 onward, and its output passes standard batteries of statistical tests, though it is not cryptographically secure: attacks that recover its state from its output have been published. Whether a deterministic rule can produce something usefully indistinguishable from randomness is a deep question; Rule 30 is among the best practical evidence that it can.
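The idea behind Rule 30 as a generator can be sketched in a few lines: read off the center column of the pattern grown from a single live cell. This is a toy illustration, not Mathematica's production implementation:

```python
def rule30_bits(n_bits):
    """Toy PRNG sketch: the first n_bits of Rule 30's center column."""
    width = 2 * n_bits + 1        # wide enough that the edges never reach the center
    cells = [0] * width
    cells[width // 2] = 1         # single live cell
    out = []
    for _ in range(n_bits):
        out.append(cells[width // 2])
        # Rule 30 in Boolean form: new = left XOR (center OR right)
        cells = [
            cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
            for i in range(width)
        ]
    return out

bits = rule30_bits(8)   # -> [1, 1, 0, 1, 1, 1, 0, 0]
```

Despite coming from a fixed deterministic rule and a fixed seed, this bit stream defeats standard statistical regularity tests as it grows.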
Rule 110 was more significant theoretically. Wolfram had identified it as a candidate for universal computation — a one-dimensional, two-state CA that could compute anything a Turing machine could compute. The proof, if it existed, would be one of the most striking results in theoretical computer science: a simpler universal computer than anything previously known.
The proof existed. But Wolfram had not found it.
Matthew Cook and the Proof That Was Suppressed
In the 1990s, Wolfram employed a research assistant named Matthew Cook to help with the work that would become A New Kind of Science. Cook, a young mathematician of deep talent, took on the Rule 110 problem and solved it. He constructed a proof that Rule 110, operating on a specific repeating background pattern, could simulate a cyclic tag system (a Turing-complete model of computation), and thus that Rule 110 was universal.
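The target of Cook's construction, the cyclic tag system, is easy to state even though the emulation itself is intricate. A toy version in Python — the productions and initial word here are illustrative, not taken from Cook's paper:

```python
def cyclic_tag(word, productions, steps):
    """Run a cyclic tag system: each step pops the first symbol of the word;
    if it is '1', the currently active production is appended; the active
    production cycles through the list."""
    history = [word]
    for t in range(steps):
        if not word:          # halting convention: stop on the empty word
            break
        head, word = word[0], word[1:]
        if head == "1":
            word += productions[t % len(productions)]
        history.append(word)
    return history

cyclic_tag("1", ["11", "0"], 3)   # -> ['1', '11', '10', '011']
```

Cook's proof shows how colliding particle-like structures ("gliders") in Rule 110's space-time diagram carry out exactly this pop-and-append dance.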
This was, by any standard, a remarkable mathematical result. Cook presented it at the CA98 conference at the Santa Fe Institute — before Wolfram’s book was published.
Wolfram’s response was to sue. Wolfram Research obtained a court order blocking the publication of Cook’s proof in the CA98 proceedings, arguing that Cook had violated a non-disclosure agreement by presenting unpublished work that Wolfram considered proprietary. The existence of a mathematical proof was, in Wolfram’s framing, a trade secret.
The legal dispute delayed Cook’s proof for years. A New Kind of Science was published in 2002 with an outline of the proof, attributed to Cook in a footnote. Cook’s full paper, “Universality in Elementary Cellular Automata,” was finally published in 2004 in Wolfram’s own journal, Complex Systems — six years after Cook had first presented the result, and only after Wolfram had made the outline public in his book.
The episode illuminates something important about the sociology of A New Kind of Science. Wolfram had, over the course of a decade, assembled a research program that was genuinely productive. But his relationship to that research — his insistence on controlling it, his self-presentation as its sole discoverer — did not match how science normally works. The Rule 110 result was not Wolfram’s to control. It was Cook’s result, and its delayed publication was a straightforward case of scientific suppression.
Cook went on to a career in computational neuroscience in Zurich; he has said little publicly about the episode.
The Principle of Computational Equivalence
The empirical core of A New Kind of Science is real science, carefully done. The theoretical superstructure is where the book becomes controversial.
Wolfram’s central philosophical claim is the Principle of Computational Equivalence (PCE): virtually all processes that are not obviously simple are computationally equivalent — they are all capable, in some sense, of universal computation. The implication is that human brains, biological ecosystems, weather systems, and Rule 110 are all “the same” in a deep computational sense. No substrate is more powerful than any other. The universe does not have a preferred level of computational sophistication.
The PCE is elegant and, in a loose sense, has some truth to it. The observation that universal computation arises in many simple systems — Life, Rule 110, tag systems, Wang tiles — is genuinely interesting and not obvious. Wolfram assembled more examples of this phenomenon than anyone before him.
But as a scientific claim, the PCE has serious problems. It is not precisely stated — “virtually all processes that are not obviously simple” is not a mathematical definition, and “computationally equivalent” is used far more loosely than the standard technical meaning. It makes no testable predictions — no experiment could falsify the PCE as Wolfram states it. And it papers over distinctions that computational complexity theory treats as fundamental: between systems that can compute a function at all and systems that can compute it efficiently, between undecidable problems and merely hard ones, between computability (Turing completeness) and complexity (the territory of P versus NP).
Scott Aaronson, the complexity theorist, put it sharply: the PCE “shows a misunderstanding of what computational complexity is about.” The interesting questions in computation are not “can this be computed at all?” but “how hard is it to compute?” And on those questions, the PCE is silent.
Cosma Shalizi, in what became one of the most widely-read reviews of NKS, was more broadly critical: the book’s failure to engage with the existing scientific literature on complex systems, dynamical systems, and computational complexity meant that many of its “new” discoveries had precedents that Wolfram apparently was not aware of or chose not to acknowledge.
The Physical Universe Hypothesis
Wolfram did not stop at computational equivalence. The book’s final section proposes that the physical universe is itself a computational process — specifically, some kind of spatial network automaton updating according to a simple rule. Physical laws, on this view, are emergent properties of the computational structure of the universe. Space, time, matter, and energy are all patterns in an underlying network.
This idea had precedents that Wolfram barely acknowledged. Konrad Zuse had suggested as early as 1969, in Rechnender Raum, that the universe was a computation, and Edward Fredkin had developed “digital physics” through the 1970s and 1980s. The connection between these traditions and A New Kind of Science is real but only thinly acknowledged.
Most physicists found the physical universe hypothesis unpersuasive, for a specific reason: it made no concrete predictions. A theory of everything that cannot tell you the mass of the electron, or the fine structure constant, or the ratio of dark energy to dark matter, is not yet a theory of everything. It is a framework for a theory. Wolfram acknowledged this but argued that finding the specific rule would come later.
The 2020 Physics Project
In April 2020, Wolfram announced the Wolfram Physics Project with a 448-page technical exposition, “Finally We May Have a Path to the Fundamental Theory of Physics.” The project proposes that the universe emerges from the repeated application of simple rewriting rules to abstract graphs — “hypergraphs” in which nodes are abstract entities and edges represent their relationships. Space is the large-scale structure of the hypergraph; time is the process of applying the rules; particles and forces are persistent patterns in the evolving structure.
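A toy rewriting step conveys the flavor. The rule below is purely illustrative (not one of the project's candidate rules), it uses ordinary edges rather than hyperedges, and for simplicity it rewrites every edge at once, whereas the actual project studies single-match updates:

```python
def rewrite_step(edges, fresh):
    """Apply the toy rule {x,y} -> {x,z}, {z,y} to every edge, where z is
    a freshly created node; returns the new edge list and next fresh id."""
    new_edges = []
    for x, y in edges:
        z, fresh = fresh, fresh + 1   # the rule introduces one new node per match
        new_edges += [(x, z), (z, y)]
    return new_edges, fresh

edges, fresh = [(0, 1)], 2
for _ in range(3):
    edges, fresh = rewrite_step(edges, fresh)
# The graph grows geometrically: 1 -> 2 -> 4 -> 8 edges.
```

In the Wolfram model, properties like dimensionality and curvature are supposed to emerge from the large-scale statistics of structures built this way.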
The project has produced genuine mathematical work, much of it by Wolfram’s collaborators and by researchers in his open-science program. It has identified candidate rules that reproduce, in the appropriate limit, features of general relativity and quantum mechanics. It has attracted serious attention from some physicists, particularly those interested in discrete approaches to quantum gravity.
The scientific community’s reception has been cautious. The framework is not obviously wrong — it is in the tradition of serious research programs like loop quantum gravity and causal set theory. But it has not yet produced the falsifiable predictions that would make it testable, and the same charge of imprecision that haunted NKS has been leveled against the Physics Project.
What Wolfram Got Right
The scientific community’s criticism of A New Kind of Science is fair. The book is over-claimed, under-proven, and insufficiently connected to existing literature. But it is not without genuine contributions.
The rule numbering system that Wolfram introduced for elementary CA — numbering each of the 256 rules by the decimal value of their output table — is now standard. Every paper on elementary CA uses Wolfram’s notation.
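Going the other way — from an update function to its rule number — is a one-liner, which is part of why the scheme stuck. A sketch, using the conventional (left, center, right) bit ordering:

```python
def rule_number(f):
    """Read the Wolfram rule number off an update function f(left, center,
    right): evaluate it on all 8 neighborhoods and pack the outputs as the
    bits of an integer."""
    return sum(f((k >> 2) & 1, (k >> 1) & 1, k & 1) << k for k in range(8))

rule_number(lambda l, c, r: l ^ r)         # -> 90 (left XOR right)
rule_number(lambda l, c, r: l ^ (c | r))   # -> 30
```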
The systematic experimental survey of all 256 rules was genuinely valuable. Wolfram documented, more completely than anyone before, the behavioral diversity of elementary CA and the surprising richness that could be found in simple rules. This is real empirical science.
The mainstreaming of cellular automata as a research tool is partly Wolfram’s doing. The publication of NKS, despite — or perhaps because of — its controversy, brought CA to the attention of a generation of researchers who would not otherwise have encountered them. The connection between CA, complexity, and physical modeling is more widely understood today partly because of the book’s reach.
The Rule 110 result (Cook’s result, published under Wolfram’s aegis) is a genuine theorem: Rule 110 is the simplest known universal cellular automaton — one-dimensional, two states, a three-cell neighborhood. Whatever one thinks of the surrounding claims, this is a real result in theoretical computer science.
Conway’s Life had already demonstrated that universal computation could arise in two-dimensional CA. Rule 110 pushed the same result into one dimension. Together, they constitute the strongest evidence available that universal computation is not an exotic phenomenon requiring elaborate machinery — it is a natural feature of simple rule systems operating in even the most constrained spaces.