
Friday, September 18, 2009

Cellular Automata Part 3

In part 2 of this series I got to the stage where I accepted (with some help from Paul Abbots) that Wolfram’s principle of computational equivalence was at least a plausible enough conjecture to run with. As I understand it, Wolfram’s conjecture differs from the Church-Turing thesis in that it ascribes universality not to computational models but to a class of individual simple computations; according to Wolfram, simple computations whose output looks chaotic are in most cases likely to have the property of universality, in that they can, with the right initial conditions, emulate any computation. Wolfram’s conjecture does have a deep intuitive plausibility: truly chaotic computations are forever generating novel configurations, and so one might expect that, given enough time to build up a long history of computation, at some point they will emulate any chosen computation.
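
To make concrete the kind of “simple computation” Wolfram has in mind, here is a minimal sketch (in Python; my own illustration, not Wolfram’s code) of an elementary one-dimensional cellular automaton. Rule 30 is the standard example of a simple rule whose output looks chaotic; Rule 110 is the one that has actually been proved universal.

```python
# Minimal sketch of an elementary (one-dimensional, two-state) cellular automaton.
# Rule 30 is the usual example of a simple rule with chaotic-looking output;
# Rule 110 is the rule that has been proved computationally universal.

def step(cells, rule=30):
    """Apply one elementary CA update to a list of 0/1 cells (edges wrap around)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # The three neighbour bits index into the 8-bit rule number.
        index = (left << 2) | (centre << 1) | right
        new.append((rule >> index) & 1)
    return new

# Start from a single black cell and print a few generations.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=30)
```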

However, when I first came across Wolfram’s principle of computational equivalence in his YouTube lecture, this is how I responded:

On the subject of computational equivalence Wolfram is less compelling. Simple processes can produce the high complexity of disorder, but simple processes, which are relatively small in number, merely sample a very small part of the huge class of complex and disordered configurations. The only way of reaching this huge class is to start with complexity. Accordingly I am not sure I know what Wolfram means when he talks about a “maxing out” of complexity. Complexity of process seems to have no upper limit. Perhaps I do not understand Wolfram aright but it seems to me that there is a huge platonic realm out there of highly complex algorithms, whose rules are beyond our ken, algorithms whose computational resources are too great to be humanly manageable.

In saying that there is a huge class of complex and disordered configurations that are unreachable without the resource of a-priori complexity am I in conflict with Wolfram’s view? Yes and no. ‘Yes’ because if Wolfram’s principle is correct then in principle any configuration can eventually be reached from a single simple “chaotic” computation. (So in absolute terms what I have said above is wrong.) And yet ‘No’ because I really only had practical computation in mind. In a very effective sense there is a class of configurations that are unreachable, and this for the following reason. In most cases the program controlling a computation is a very much smaller object than the pattern of the computation itself; the reified computation often describes a very large object in both time and space. This means that the number of possible computations, in terms of the combinatorial possibilities open to this huge domain of possible patterns, is far greater than the number of possible “small and simple programs”. Thus although, according to Wolfram, a small “chaotic” program may in time reach all possible configurations, it will not be able to reach all of them in realistic time, simply because there are too many of them. For small programs running on a given model of computation there will be a large class of configurations out of range as far as realistic computation times are concerned.
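
The counting point can be made concrete with a little arithmetic. In the sketch below (Python, my own illustration), binary strings stand in for both “programs” and “configurations”: there are 2^n configurations of length n, but only about 2^(k+1) programs of length k or less, so for n much larger than k the overwhelming majority of configurations have no short program behind them.

```python
# Rough counting sketch for the argument above, taking binary strings as a
# stand-in for both "programs" and "configurations".

def count_configurations(n):
    """Number of distinct binary configurations of length n."""
    return 2 ** n

def count_programs_up_to(k):
    """Number of binary strings of length at most k (a generous bound on 'small programs')."""
    return sum(2 ** i for i in range(k + 1))  # = 2**(k+1) - 1

n, k = 1000, 100   # a modest configuration size versus a generous program size
print(count_configurations(n) // count_programs_up_to(k))  # astronomically large ratio
```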

In part 2 of this series I conjectured that there was a complementary relationship between program size and computation time: shortening program length raises the lower limit on the time needed to achieve a particular computation, whereas lengthening the program by adding data allows that lower limit to be reduced. Mathematically:

program length + computation time >= a constant for the given computation*

Or in abbreviated form:

P + T >= C,*

… where I have sharpened up the notion of my original “equation” by relating the sum of P and T via an inequality to a constant that is set by the computation in hand. I have been looking into this conjectured relationship in order to put it on a more rigorous footing, but for good or bad I am going to run with it for the time being to see where it takes me.
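
As a toy illustration of the trade-off (my own example, not a derivation of the inequality), consider the fixed task of producing the first N primes: a short program must grind the answer out from scratch (small P, large T), while a program that carries the answer as literal data is far longer but finishes almost immediately (large P, small T).

```python
# Toy illustration of the conjectured trade-off P + T >= C for a fixed task:
# produce the first N primes.

N = 10_000

def primes_by_search(n):
    """Small program, long computation: trial division from scratch."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found if p * p <= candidate):
            found.append(candidate)
        candidate += 1
    return found

# "Large" program: the same output baked in as data, so the remaining work is trivial.
PRECOMPUTED = primes_by_search(N)   # imagine this list written out literally in the source

def primes_by_lookup(n):
    """Large program, short computation: just read off the stored answer."""
    return PRECOMPUTED[:n]

assert primes_by_search(N) == primes_by_lookup(N)
```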

The constant C in the above relation can be arbitrarily large, depending on the computation in hand. From this it follows that for these “large” computations a small P could only reify the computation after a very long time. However, the relation also allows computation time to be shortened by increasing P; that is, by increasing the complexity of the program itself. Thus whilst Wolfram’s principle of computational equivalence may be true in that any simple chaotic program can be used to describe an arbitrarily chosen computation, albeit in a prohibitively long time, complementarity means it is also true that achieving certain computations in realistic times may require prohibitively complex programs. In terms of achieving timely results there therefore seems to be no upper limit on computational complexity and power; certain timely results will demand highly complex programs. This is the intent of my original statement: there are computations that are, practically speaking, unreachable, unless enormous a-priori data resources are supplied.

If a physical theory works only by using huge computational resources, in terms of very large program strings and/or very long generation times, then it is of little computational utility to human beings. Fortunately for us the computational rules of the cosmic physical regime seem manageable both in terms of initial data and computation times. And yet “…it seems to me that there is a huge platonic realm out there of highly complex algorithms, whose rules are beyond our ken, algorithms whose computational resources are too great to be humanly manageable.” What if the randomness of quantum mechanics is absolute, in that it is not the result of some underlying chaotic determinism (like, say, decoherence) but a brute given input to our cosmos? If that is the case then those random input configurations may be among those that are, practically speaking, algorithmically unreachable. God knows.

Footnote:
* Obviously some nuancing of terms is needed here. Some "computations" appear to increase in computation time as input data length is increased; e.g. factoring a number.
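
The footnote’s point can be seen in a quick sketch (Python, my own illustration): for naive trial-division factoring, lengthening the input number increases rather than decreases the work to be done.

```python
# Quick sketch of the footnote's caveat: for naive trial-division factoring,
# adding digits to the input n increases the work rather than reducing it.
# The sample numbers are assumed to be prime (the worst case for trial division).

def trial_divisions(n):
    """Count trial divisions needed to find the smallest factor of n."""
    d, steps = 2, 0
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return steps
        d += 1
    return steps

for n in (1000003, 1000000007, 1000000000039):
    print(len(str(n)), "digits:", trial_divisions(n), "trial divisions")
```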
