
Tuesday, September 14, 2010

Determinism and Indeterminism

Wolfram's Ideas Have Some Implications For Indeterminism

Somebody recently asked me to comment on the nature of determinism and indeterminism. Here is my answer:

Here is some waffle regarding your questions:

Let’s assume every physical thing constitutes a pattern or structure of some kind, and that the job of physics is to come up with a description of these objects using generating functions or algorithms. The functions/algorithms used to describe these physical patterns or structures constitute a theory. On this view, something like Newtonian theory is composed of a set of functions/algorithms that can be used to describe the patterns of the world around us.

If we take on board this rather sketchy model of physics (which is in fact the way I personally understand physics at the moment), some things obviously follow. The class of compact or small algorithms/functions is relatively limited, for the fairly obvious reason that small objects are by definition constructed from a relatively small number of parts, and these parts can only be juxtaposed in a limited number of ways. The physical objects theories attempt to describe, in contrast, may be huge. Thus for straightforward combinatorial reasons the number of possible physical objects is far greater than the number of compact physical theories that can be constructed.
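
To make the counting argument concrete, here is a toy illustration (the bit-lengths are mine, chosen purely for the arithmetic): treat a candidate “theory” as a binary string of at most 20 bits, treat a “pattern” as a binary string of exactly 100 bits, and simply count both classes.

```python
# Toy illustration of the counting argument: there are far fewer short
# descriptions than there are large patterns needing description.

def count_strings_up_to(k: int) -> int:
    """Number of binary strings of length 1..k (candidate 'theories')."""
    return 2 ** (k + 1) - 2  # 2 + 4 + ... + 2^k

def count_strings_exactly(n: int) -> int:
    """Number of binary strings of length exactly n (candidate 'patterns')."""
    return 2 ** n

k, n = 20, 100  # a <=20-bit theory vs. 100-bit patterns
theories = count_strings_up_to(k)
patterns = count_strings_exactly(n)
print(f"theories of <= {k} bits: {theories:,}")
print(f"patterns of {n} bits:    {patterns:,}")
print(f"patterns per theory:     {patterns // theories:,}")
# By the pigeonhole principle, most 100-bit patterns have no 20-bit
# description at all - unless, as below, we also trade description
# length for execution time.
```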

However, the ideas of Stephen Wolfram have some interesting implications here. He believes (his “Principle of Computational Equivalence”) that the majority of relatively small algorithms can, given enough time, compute anything that can be computed. If I understand Wolfram aright, this means that any given (finite) physical pattern/structure can, given enough time, be described by a physical theory, where “theory” in this context is understood as a kind of algorithm or function. Admittedly Wolfram’s ideas are conjectural, but if they hold true there are no finite objects out there that cannot be described using a theory: given enough computation time, a theory will eventually describe any stated physical system. The mismatch between the limited number of succinct theoretical systems and the huge number of physical objects to be described is surmounted because we allow physical theories indefinite amounts of execution time to come up with the goods.
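
For a flavour of the kind of “relatively small algorithm” Wolfram has in mind, here is a minimal sketch of an elementary cellular automaton (my own illustration; the coding details are incidental). Rule 110, used here, has been proved Turing-complete, so this eight-entry lookup table can in principle compute anything computable, given enough cells and enough time.

```python
# A minimal elementary cellular automaton. Rule 110 is known to be
# Turing-complete: a tiny program whose long-run behaviour can be
# arbitrarily rich.

RULE = 110
TABLE = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """Apply the rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and watch structure unfold.
row = [0] * 63
row[31] = 1
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```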

If Wolfram is right (and I have a gut feeling he is), and if we define an indeterminate system as one that cannot be described with an algorithm or function, then it seems that on this basis there are NO (finite) systems that are indeterminate – all systems can be computed from some basic algorithm given enough execution time. Therefore, if we want a definition of determinism/indeterminism that does justice to the fact that some systems seem less humanly tractable than others, we need to change the definition of determinism.

In spite of the fact that (according to Wolfram) everything is in principle deterministic, it is clearly not so from a practical human angle. Some objects are, from our perspective, “more deterministic” than others, and my proposal is therefore that determinism and indeterminism are not a dichotomy but the two ends of a spectrum.

Some physical objects have a pattern that is highly ordered, and we can quickly rumble the algorithm that generates it with a relatively short execution time. We can then use the algorithm to “predict” between the dots of observation, and in this sense the pattern is deterministic. However, as the pattern becomes more and more complex, locating an algorithm that will describe it in a convenient execution time becomes more and more difficult: most highly complex physical patterns will only be generated after an impractical execution time, and thus, humanly speaking, the system is indeterministic. Physical systems therefore move from a deterministic classification to an increasingly indeterministic classification as they require longer and longer execution times to be generated by an algorithmic "theory". In this connection I suspect that the randomness of quantum mechanics is, from a human point of view, practically indeterministic.
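
Here is a toy sketch of that idea of an execution-time budget (my own construction, not anything from Wolfram): the candidate “theories” are the 256 elementary cellular automaton rules, and a pattern counts as deterministic-for-us only if some rule regenerates it from the given initial state within the budget.

```python
# Theory search with an execution-time budget: a pattern is
# "deterministic for us" only if a rule reproduces it in time.

def run_rule(rule, cells, steps):
    """Evolve `cells` under an elementary CA rule for `steps` steps."""
    table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    n = len(cells)
    for _ in range(steps):
        cells = [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                 for i in range(n)]
    return cells

def find_theory(target, start, budget):
    """Return the first (rule, step) mapping `start` to `target` within
    `budget` steps, else None: with a small budget many patterns look
    'indeterministic' even though some rule reaches them eventually."""
    for rule in range(256):
        cells = start[:]
        for step in range(1, budget + 1):
            cells = run_rule(rule, cells, 1)
            if cells == target:
                return rule, step
    return None

start = [0] * 21
start[10] = 1
target = run_rule(90, start[:], 8)            # a pattern secretly made by rule 90
print(find_theory(target, start, budget=5))   # probably None: budget too small
print(find_theory(target, start, budget=12))  # a (rule, step) match turns up
```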

Finally, to answer your question:

....is it true that for a model to *completely* predict everything within a system it would have to be at least as large (in terms of information content) as the system it was predicting, or is there a case in which a model could *completely* predict everything within a system but not be at least as large (in terms of information content)?

I’m not going to answer this question in terms of “information content”, as we would get embroiled in the technical definition of information; instead I’ll answer it from an intuitive and human standpoint. One of the benefits of mathematical theorising, from a human point of view, is that in most cases the theory is not as large and complex as the system it is “predicting”. Think of the Mandelbrot set and compare it with the algorithm used to generate it: it is very easy to remember the algorithm, but not a bit-for-bit map of the set itself. If physics is to the structures it predicts as the Mandelbrot algorithm is to the Mandelbrot set, then in terms of bit-for-bit data storage the models of physics, like the Mandelbrot algorithm, are in human terms much smaller than the objects they compute. However, it seems that there are many objects out there, like history and God, that are not humanly theoretically “compressible”, and therefore they will ever remain narrative-intense subjects.
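
The Mandelbrot comparison can be made concrete. The generator below is a few hundred bytes of source, yet the bitmaps it can emit grow without bound as the resolution is raised (a rough ASCII sketch; the coordinates and iteration limit are just conventional choices).

```python
# A tiny "theory" whose output can be made arbitrarily large: the
# Mandelbrot iteration z -> z*z + c, rendered as ASCII art.

def mandelbrot(width=78, height=30, max_iter=40):
    rows = []
    for j in range(height):
        y = 1.2 - 2.4 * j / (height - 1)
        row = ""
        for i in range(width):
            x = -2.1 + 3.0 * i / (width - 1)
            z, c = 0j, complex(x, y)
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:      # escaped: outside the set
                    row += " "
                    break
            else:
                row += "#"          # (probably) inside the set
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```

Raise width, height and max_iter and the output grows without limit, while the program stays the same size: the algorithm is small, the object it computes is as large as you care to make it.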
