I recently watched this YouTube video featuring Stephen Wolfram expounding his theory of cellular automata. I have tinkered around with cellular automata in the past myself, but it seems that Wolfram is trying to develop the notion into the ultimate killer science. Looking around on the internet, however, it seems that not everybody is impressed with Wolfram’s lack of modesty and unwillingness to give credit. For myself, I was surprised that in his lecture Wolfram didn’t at least make some mention of researchers like Kolmogorov and Gregory Chaitin. But this totalizing kind of theorist often goes together with a large ego. I hear rumor that Benoit Mandelbrot was of a similar type.
Nevertheless, the lecture was fascinating stuff. Wolfram believes that in order for our science to make further progress, especially in the area of complexity, new kinds of primitives are required. Needless to say, Wolfram believes that cellular automata are those primitives. He showed us a variety of objects that cellular automata are capable of generating, from the complexity of high disorder, through patterns that looked like particle collisions, to spatial geometries that had the properties of gravitational curvature. Of particular interest to me were Wolfram’s references to Turing automata, which build up structure by weaving a single thread of time, like a kind of continuous scan line, through reality. But he pointed out a problem with Turing automata: the reality they generate may depend on the order in which the automaton visits various locations. Hence a class of automata is required that is insensitive to the update order, and Wolfram referred to these kinds of automata as “causally invariant”. He claimed that causally invariant automata give rise to relativity-like consequences. But what about quantum mechanics and the non-locality of the EPR experiment? How do strictly local automata deal with this? I believe Wolfram has been challenged on this point, but during his lecture he made passing references to extra-dimensional links between particles which could cater for this. Wolfram believes that cellular automata provide a very literal digital simulation of reality, and this leads him to declare that in his view time and space are very different things, perhaps mirroring the distinction between memory space and computational steps.
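To make the sort of object Wolfram is playing with concrete, here is a minimal sketch (my own illustration, not Wolfram’s code) of a one-dimensional, two-state cellular automaton. Rule 30 is chosen only because it is a well-known example of a very simple rule whose output looks thoroughly disordered.

```python
# Minimal one-dimensional cellular automaton -- an illustrative sketch only.
# Each cell is 0 or 1; a cell's next state depends on itself and its two
# neighbours, looked up in an 8-entry rule table (rule 30 here).

RULE = 30
RULE_TABLE = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """Apply the rule to every cell, treating the row as circular."""
    n = len(cells)
    return [RULE_TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(width=79, steps=30):
    cells = [0] * width
    cells[width // 2] = 1              # start from a single black cell
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)

if __name__ == '__main__':
    run()
```

Starting from a single black cell, this handful of lines already produces the kind of disordered, triangle-strewn pattern that Wolfram shows in his lecture.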
On the face of it, it all looks rather impressive, although Wolfram’s treatment of the EPR experiment does look to me a little contrived. Wolfram expressed the belief that real physics is digital, and it seems that he really does believe that cellular automata capture some kind of reality about the universe. But just what is the nature of this correspondence? It is clear that standard Turing machines can simulate a large class of reality, and yet we don’t conclude that the ontology of the cosmos is based on Turing machines. The question, then, is this: is Wolfram simply propounding just another model of computation, like Turing machines, recursive functions, counter programs etc., or does cosmic ontology closely correspond to cellular automata? Wolfram is in fact facing the same question that I faced when I wrote my book on Gravity and Quantum Non-Linearity. Having arrived at some interesting results using the particular model I employed, I conceded: “The work in this book may be little more than a novel way of simulating quantum mechanics and relativity for the price of one”. But then I went on to say: “All theories are, I believe, simulations, but the crucial question is, do those simulations have a life of their own in as much as artifacts of their construction anticipate aspects of reality as yet unknown?” I then tried to show how my proposal did make predictions, and that is precisely what Wolfram must also do: can he show that cellular automata have an ontological reality that distinguishes them from a mere computational technique? Providing predictions is another point on which Wolfram has been challenged.
Reality aside, Wolfram’s model of computation, perhaps because it is very visual, does seem to bring out some results in the theory of computation very clearly. The work of people like Kolmogorov, Chaitin and others has made us aware of the notion that any given computational task has a lower limit on its computational complexity in terms of the minimum resources (such as memory space, initial program string length and number of computational steps) required to carry it out. Hence, given a particular computational task, these resources cannot be indefinitely reduced, and thus a task has an irreducible minimum of required resources. This idea is, I think, equivalent to Wolfram’s notion of computational irreducibility, a notion that comes out very clearly in his lecture when he says that a process is computationally irreducible if the only way to find out its result is by running it. Computational irreducibility follows (presumably) when a task’s computational resources are at an absolute minimum, in which case it is not possible to find an analytical solution that arrives at the same result any quicker or more easily by using fewer resources. From this follow Gödel’s and Turing’s undecidability theorems: if a process is computationally irreducible and the only way we are going to find out whether it stops is by running it, then its halting point may be too far off for it to be humanly possible to make the halting question decidable. Wolfram may not be original here, but I personally found his visualization of Gödel and Turing helpful.
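A toy way of getting the flavour of this (my own example, not one Wolfram used) is the Collatz map: as far as anyone knows, there is no shortcut for predicting how many steps a given starting value takes to reach 1, other than simply running the iteration.

```python
# Collatz map: n -> n/2 if n is even, else 3n + 1.
# No general shortcut is known for predicting the number of steps to reach 1;
# as far as anyone knows, the only way to find out is to run the process.
# (It is not even proven that every starting value reaches 1 at all.)

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

for start in (27, 97, 871):
    print(start, '->', collatz_steps(start), 'steps')
```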
On the subject of computational equivalence Wolfram is less compelling. Simple processes can produce the high complexity of disorder, but simple processes, which are relatively small in number, merely sample a very small part of the huge class of complex and disordered configurations. The only way of reaching this huge class is to start with complexity. Accordingly, I am not sure I know what Wolfram means when he talks about a “maxing out” of complexity. Complexity of process seems to have no upper limit. Perhaps I do not understand Wolfram aright, but it seems to me that there is a huge Platonic realm out there of highly complex algorithms, whose rules are beyond our ken, algorithms whose computational resources are too great to be humanly manageable. Let’s just thank God that much of our world seems readily reducible to simple computations.
Towards the end of the video, during the question time, Wolfram appeared not to directly answer a query about just what light his work throws on the metastable states of life, structures that make extensive use of cybernetics in order to respond to an open-ended world of random perturbations. He did at one point show us a picture of a cellular automaton run that generates rare conditions whereby a particular pattern starts to take over the available space. But all this was worked out in Wolfram’s strictly deterministic world. If our world has an open-endedness about it, in that it is perturbed by a complexity that is not reducible to simple algorithms, then it is anybody’s guess what kind of perturbations could arise to undermine the “determinism” of such patterns.
Wolfram’s concepts certainly cover a lot of ground but perhaps not everything. Although Wolfram may have found a useful tool of visualization and understanding, some aspects of our world, if they should exist, may not be naturally amenable to his computational model; for example, non-locality, algorithmic irreducibility, and a cosmos where there is parallelism in time.
You wrote: "I was surprised that in his lecture Wolfram didn’t at least make some mention of researchers like Kolmogorov and Gregory Chaitin." I assume, then, that you have not read Wolfram's NKS book (which is also online). There are no references in the (printed) text (somewhat controversial), but my favourite part of his book is the extensive Notes for each chapter. You can search these at http://www.wolframscience.com/nksonline/search/?collection=nks&query=chaitin.
You wrote: "Simple processes can produce the high complexity of disorder, but simple processes, which are relatively small in number, merely sample a very small part of the huge class of complex and disordered configurations. The only way of reaching this huge class is to start with complexity." Wolfram would disagree with you, as would I. Have a read of the (online) section on the Principle of Computational Equivalence and the section on Universality http://www.wolframscience.com/nksonline/page-642
Hi Paul, thanks very much for the comments.
This is my first foray into Wolfram land, so I’ve still got a lot of ground to cover. Thanks very much for the links.
I have several reasons to naturally identify with someone like Wolfram, so I was rather dismayed to read about his personal reputation in some quarters. Perhaps some of it is sour grapes! I’m glad to hear that he provides acknowledgments in his notes.
On complexity & simple processes: Obviously something like a simple counting algorithm, which is systematically visiting the whole of configuration space, will, given enough time, generate all configurations up to the length of the available medium. Moreover, any simple deterministic process that doesn’t eventually repeat itself in time will visit every available state, simple and complex.
However, I was actually thinking of small or simple algorithms that “rapidly” generate large complex configurations in polynomial time (e.g. pseudo-random sequence generators). Here my understanding is this: the number of strings representing small (= simple) algorithms is limited by the number of combinations of symbols available to a small string, and this puts an upper limit on the availability of small programs. Thus the population of small programs is too small to map, via polynomial-time connections alone, onto the whole of the huge hinterland of large complex configurations; ergo small programs can only sample the space of complexity if we insist on polynomial-time connections only. The reasoning here seems fairly robust to me, but I will certainly consult Wolfram on this point as he is clearly a man we should be listening to. So thanks again for the link.
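A back-of-envelope version of that counting argument, just to put rough numbers on it (my own sketch, using binary strings as a stand-in for both programs and configurations):

```python
# Count binary programs of length up to n versus binary configurations of
# length N. The point is only the order of magnitude: short programs can
# label a vanishingly small fraction of long configurations.

def programs_up_to(n, alphabet=2):
    """Number of strings of length 1..n over the given alphabet."""
    return sum(alphabet ** k for k in range(1, n + 1))

def configurations(N, alphabet=2):
    return alphabet ** N

n, N = 50, 1000    # 'small' program length versus 'large' configuration length
short_programs = programs_up_to(n)
all_configs = configurations(N)
print("short programs : about 10^%d" % (len(str(short_programs)) - 1))
print("configurations : about 10^%d" % (len(str(all_configs)) - 1))
print("fraction reachable: roughly 10^-%d" % (len(str(all_configs)) - len(str(short_programs))))
```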
The foregoing point interests me greatly because it impinges upon the evolution/ID debate. Currently I would probably go along with Wolfram’s view that the complex configurations of life are an outcome of simple processes. However, as life seems to have arisen in polynomial time, my conclusion is that the cosmos is one of those rare physical regimes that somehow encodes life in simple laws. Life, then, would be one of those rare polynomial-time mappings from simple to complex. Thus, on my view, life is not the result of a process that systematically or randomly exhausts the space of possibilities. (I’m not a multiverse fan.)