Stephen Wolfram
A new kind of science
Wolfram Media, 2002

What can you say about a book on science, written for a popular audience, which the author claims extends the scientific method into new domains? That the pictures are beautiful and the author is brilliant? That the author has immense confidence in his assessment of physics, biology, mathematics, computing science, and himself?

You simply cannot dismiss Stephen Wolfram as another of those nutty guys. His arguments run to 850 pages of small print. Over years of work with his Mathematica software, he has computed the 973 pictures in the book that let the reader visualize the ideas being developed. And there are 350 pages of footnotes in even smaller print, a mini-encyclopaedia that ranges beyond science to quick comments on animism, art, atomism, postmodernism, relativism, textile making, theology ...

After laying out his purpose in writing this book in Chapter 1, Wolfram begins in Chapter 2 with the cellular automaton model (which I will take the liberty of abbreviating as CA), originally developed by John von Neumann. Chapter 3 describes extensions of the basic model which allow many dimensions, many colours and so on, showing how Turing machines, register machines, tag systems and substitution systems can be represented using CA. Chapter 5 introduces more sophisticated systems, such as fractals, ``multiway'' (nondeterministic) and ``network'' (graph rewriting) systems.

The novelty lies in the visual presentation of the evolution of these ``simple programs'' over thousands of steps. Wolfram notes that even the simplest (one-dimensional) CA model can produce complex behaviour. On the basis of his pictures of their evolution, Wolfram puts CA behaviour into four ``complexity'' classes: 1) repetitive, 2) nested (similar to what is elsewhere called ``fractal''), 3) random, and 4) random with localized structures. Chapter 6 studies this classification in some detail and shows how initial conditions can lead a system into different classes of behaviour.
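The flavour of these experiments is easy to reproduce outside Mathematica. Here is a minimal sketch (my illustration, not Wolfram's code) of a one-dimensional, two-colour CA driven by its 8-bit rule number; rule 30 is Wolfram's showcase of class-3 (random) behaviour, while rule 90 yields the nested, Sierpinski-like class-2 pattern:

```python
# Evolve a one-dimensional, two-colour cellular automaton given by its
# 8-bit rule number, starting from a single black cell (wrap-around edges).
def step(cells, rule):
    n = len(cells)
    # each new cell is the rule's output for the 3-bit neighbourhood
    # (left, centre, right), which indexes into the rule's bit table
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1          # single black cell as initial condition
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

for row in evolve(30):             # try 90 for a nested (class-2) pattern
    print("".join("#" if c else "." for c in row))
```

Printing the successive rows one below the other reproduces, in ASCII, exactly the kind of space-time picture that fills the book.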

Chapter 4 describes the behaviour of recursive sequences of numbers using the same techniques of visualization. A continuous model of CA is presented and visualized. Wolfram conjectures that the same classification of complexity will reappear in continuous systems.

Chapter 7 is more ambitious: Wolfram proceeds to examine how random behaviour might be generated in natural systems (as seen through the prism of CA). First, there may be random input from the environment at every step of the CA's evolution (as in Brownian motion). Or the varying behaviour could come solely from random initial conditions (as in chaos theory). But it is the ``intrinsically random'' behaviour produced by his more complex classes of CA rules that Wolfram favours. (In the footnotes, Wolfram says this was ``greeted with much hostility'' by some chaos theorists in 1985.)

A large part of Chapter 8 is a polemic on biology. Wolfram thinks biology is too obsessed with natural selection and maximal fitness as the principles which generate complexity. (In the footnotes, Wolfram cites discussions with ``many biologists.'')

Wolfram hazards the thought that complexity is produced by randomness. For instance, higher organisms evolved not because viruses and bacteria were failing but through random mutations. But biologists have thought of this before, and even written popular books about it: see Life's grandeur by Stephen Jay Gould (Vintage, 1997). The art of genes by Enrico Coen (Oxford, 1999) gives examples of growth and development quite similar to those offered by Wolfram.

Chapter 9 turns Wolfram's gaze onto physics. He transports the observation that localized structures can be found in the midst of seemingly random behaviour in his CA-based systems into a thesis on how self-organized systems evolve in the presence of the second law of thermodynamics. Wolfram suggests space could be discrete at a very micro level. (Fredkin, Minsky and Feynman are mentioned in the footnotes.)

For instance, Wolfram says, elementary particles could be localized structures in a rule-based system which builds up space. But this is no more detailed than Feynman's one-line comment in The character of physical law. No mention is made, for example, of the work of Nobel physicist 't Hooft, who has worked since the 1980s on the physics of quantized space and time. Even the use of cellular automata to discretize science is not new; it was proposed by Edward Fredkin in the 1980s under the rubric that the whole universe is a cellular automaton.

Kolmogorov-Solomonoff complexity theory defines behaviour as random when no essentially shorter description (than the behaviour itself) exists. In Chapter 10, Wolfram modifies this to say that no essentially simple program can find a shorter description of the behaviour. By this criterion, the ``random'' class of CA behaviour described above is random by definition. Some of the work of the Swedish logician Per Martin-Löf in the 60s and 70s offers similar definitions of randomness.
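A crude way to get a feel for this definition (my illustration, not Wolfram's; true Kolmogorov complexity is uncomputable) is to let a general-purpose compressor stand in for ``shorter description'': a repetitive sequence compresses to almost nothing, while the centre column of rule 30, which Wolfram treats as intrinsically random, resists compression.

```python
import zlib

def rule30_centre(n):
    """Centre-column bits of rule 30, run from a single black cell."""
    width = 2 * n + 1            # wide enough that edges never matter
    cells = [0] * width
    cells[n] = 1
    out = []
    for _ in range(n):
        out.append(cells[n])
        cells = [(30 >> (cells[i - 1] * 4 + cells[i] * 2
                         + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return bytes(out)

repetitive = bytes([0, 1] * 500)     # class-1/2 style behaviour
random_like = rule30_centre(1000)    # Wolfram's class-3 example
print(len(zlib.compress(repetitive)), len(zlib.compress(random_like)))
```

The repetitive stream shrinks to a handful of bytes; the rule 30 stream stays close to its entropy limit, which is the compression-based intuition behind calling it random.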

Can there be a universal CA which simulates all other CA? Von Neumann's interest in developing CA was to study self-reproducing systems, which has some bearing on this question. The footnotes mention the Game of Life, a 2-dimensional CA able to simulate a universal TM, discovered by the mathematician John Conway in 1970. In Chapter 11, Wolfram presents a 1994 discovery by Matthew Cook (unpublished because, as Wolfram's employee, he was not allowed to publish it) of a universal 1-dimensional CA which can simulate other CA. What is presented is a visualization; one hopes Cook's proof will appear in the literature.
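The Game of Life rule itself fits in a few lines. This sketch (mine, not code from the book) steps the ``glider'', the localized structure whose ability to carry information across empty space underlies Conway's universality construction:

```python
from collections import Counter

# Minimal sketch of Conway's Game of Life: a live cell survives with
# 2 or 3 live neighbours; a dead cell becomes live with exactly 3.
# Cells are (x, y) pairs; only live cells are stored.
def life_step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# the glider: a class-4 style localized structure
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
# period 4: the glider reappears, shifted one cell diagonally
print(g == {(x + 1, y + 1) for (x, y) in glider})
```

Running it confirms the glider's period-4, one-cell-diagonal drift; streams of such gliders are what the universality proofs use as signals.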

Chapter 11 ends with some interesting conjectures. Based on the ``look'' of the evolution of some 2-state Turing machines over a 3-symbol alphabet, Wolfram conjectures that there is a universal Turing machine in this class. And he says there may even be a 3-state 2-symbol universal TM. This is a bold step down from Minsky's 7-state 4-symbol UTM of 1962.

Finally, Wolfram comes to his Principle of Computational Equivalence in Chapter 12, which he calls the original part of this book. This Principle says that all processes, artificial or natural, can be viewed as computations of equivalent sophistication. This, Wolfram argues, is the reason we see so much complexity around us. Any process (except a few simple ones) shows behaviour that appears complex to us. Our processes of perception and analysis, being processes of the same kind, cannot but find them complex.

It follows from Turing's results that for most systems, there is no way to predict how they will behave except by simulating them. Wolfram argues that the visual depiction of evolution of behaviour used in hundreds of pictures in this book is necessary, and cannot be reduced to elegant mathematical formulas. There are, in short, fundamental limitations to our science and knowledge.

What then is the use of science given this ``computational irreducibility'' (undecidability) all around us? Wolfram says his pictures demonstrate that very simple models might be useful to explain very complex behaviour. The new kind of science that Wolfram is suggesting is to give up looking for sophisticated mathematical solutions for complex behaviour and to look for simulations based on simple computational (or rule-based) models. Wolfram also conjectures that continuous mathematics might turn out to be less powerful than the discrete approach, just as the analog computer hasn't been able to match the digital one.

* * *

There are physicists who will surprise you by analyzing something that seems far removed from science by thinking about it from a physicist's angle. There are mathematicians who, when they spot a pattern, will immediately start thinking about the equations which lead to it.

Wolfram is a computing scientist in this genre: he looks at all the physics, biology and mathematics he knows--and that is a hell of a lot--from the viewpoint of a computing scientist. He tries to conjure up the cellular automaton (a model he is intimately familiar with) that could have caused the pattern he sees.

The astonishing thing, as evidenced by this 1200-page tome, is how successful he is! Not being a Mathematica addict, I haven't tried running his code, but there is something for us computing scientists to learn from this book. One cannot substitute Wolfram's visual simulations of recursive sequences, substitution systems, Turing machines, and universal Turing machine simulations for our pedantic ``seven-tuple'' definitions, but they do provide a different kind of insight.

In retrospect, it shouldn't surprise us that much. We have seen beautiful pictures from the Hubble Space Telescope where image-enhancement brings out the structure of huge gaseous objects like stars and nebulae. We know that the science of chemistry has been transformed by programs which show 3-dimensional structures of complex molecules. We read how the Human Genome Project provides a new, information-based approach to doing biology. We realize that the word ``complexity'' has penetrated deep into the physics of systems which are not easy to describe using a few universal laws like gravity or electromagnetism.

Computing is indeed transforming science, and who better to present you with a vision of this than the creator of a programming language for doing scientific computing?

Following a time-honoured method--which I am tempted to call ``an old kind of science''--Wolfram goes further. He tries to see if he can abstract from this vision common universal principles. Alan Turing did this in his 1936 paper where he tried to view all of mathematical thinking in terms of a computing device. The work of several authors has shown that this notion of computability is robust, and we today believe in Church's thesis as a ``principle of computational equivalence.'' Wolfram's work can be seen as trying to extend Church's thesis into the physical sciences.

From his definition of computability, Turing was able to look at what is not computable, and identify one problem (halting) as being not computable. Turing also came up with his universal Turing machine, which von Neumann's genius was able to fashion into an actual physical computer.

Wolfram argues that complexity in the natural world can be caused by universal behaviour and outlines his Principle of Computational Equivalence, but is not able to produce an outstanding theorem like Turing did.

As an example of his own methodology, Wolfram considers the problem of free will, which he paraphrases as how the universe can follow definite laws, while human beings can make decisions which seem quite free of obvious laws. His ``solution'' is that all components of a brain may follow precise, definite laws, but the overall behaviour of the brain, the ``mind'', might be undecidable with respect to these laws. This is an idea, but it isn't a solution: there isn't a ghost of a proof. And it is hardly an original idea.

As another application, since the ability to do sophisticated computations is something which we associate with ``intelligence'', and since fairly simple systems can be universal and exhibit Turing-powerful behaviour, Wolfram suggests that intelligence may not be something very special. Any definition of intelligence, he thinks, will just try to isolate special features of human intelligence. One implication is that a search for extraterrestrial intelligence will not be able to distinguish artificial signals from natural ones.

This Copernican idea of displacing human intelligence from the centre of the universe is not very profound. A definition of intelligence is useful if it succeeds in making distinctions, not if it confuses them. I am reminded of a question asked of John McCarthy when he proposed a definition of artificial intelligence in terms of feedback mechanisms: ``Does a thermostat think?''

The subject of complexity has grown over the last couple of decades, developing the insights that Wolfram visualizes in this book. Wolfram acknowledges this only in his footnotes, and partially at that. In the text and the footnotes, he frequently uses ``the discoveries made in this book'' as though he were the only discoverer. This is far from true. Wolfram also seems anxious for credit: in the footnotes, he says James Gleick's popular book Chaos (Viking Penguin, 1987) covers more than chaos theory and includes some of ``my'' results on CA---evidently the hostility of the chaos theorists still seems to rankle.

* * *

No one can take away from Wolfram the credit of putting together so many disparate ideas into a cohesive framework. On the debit side, most of the thousand-odd pictures which form the main novelty of the book have not been published in any refereed journal. The book itself is published by the author, not by a commercial publisher with a standard review process. Yet the media hype is enormous, and I recommend reading Jordan Ellenberg's article Blinded by science in Slate, July 2, 2002, on this subject. A web search should lead you to sites with scores of reviews, many of them much more detailed and critical than mine.

Put somewhat cynically, Wolfram's message to would-be scientists is: forget about doing mathematics, go on to doing Mathematica! But a more generous reading requires going back to a 1959 after-dinner lecture by Feynman, where he invited physicists to recognize that a computational approach to their subject was a ``new field of physics.'' (``There's plenty of room at the bottom,'' in Feynman and computation, A.J.G. Hey, editor, Perseus, 1999.) Wolfram extends Feynman's invitation to all interested readers and to all of science.