Friday, September 18, 2009

Cellular Automata Part 3

In part 2 of this series I got to the stage where I accepted (with some help from Paul Abbots) that Wolfram’s principle of computational equivalence was at least a plausible enough conjecture to run with. As I understand it, Wolfram’s conjecture differs from the Church-Turing thesis in that it ascribes universality not to computational models but to a class of individual simple computations; according to Wolfram, simple computations whose output looks chaotic are, in most cases, likely to be universal in the sense that, with the right initial conditions, they can emulate any computation. Wolfram’s conjecture has a deep intuitive plausibility: truly chaotic computations are forever generating novel configurations, and so one might expect that, given enough time to build up a long history of computation, they will at some point emulate any chosen computation.
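To make the kind of object Wolfram has in mind concrete, here is a minimal sketch of an elementary cellular automaton in Python. Rule 30 is Wolfram’s stock example of a simple rule with chaotic-looking output (Rule 110 is the one actually proven universal); the grid width, step count and periodic boundary below are illustrative choices of my own.

    # Elementary cellular automaton: each cell's next state depends only on
    # itself and its two neighbours, via an 8-entry lookup table (the "rule").
    def step(cells, rule=30):
        n = len(cells)
        new_cells = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]  # periodic boundary
            index = (left << 2) | (centre << 1) | right   # 3-bit neighbourhood -> 0..7
            new_cells.append((rule >> index) & 1)         # read that bit of the rule number
        return new_cells

    width, steps = 63, 30
    row = [0] * width
    row[width // 2] = 1                                   # a single live cell as the initial condition
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)

Swapping rule=30 for rule=110 shows the same eight-entry lookup table, and an equally tiny program, producing a quite different texture of behaviour.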

However, when I first came across Wolfram’s principle of computational equivalence in his YouTube lecture, this is how I responded:

On the subject of computational equivalence Wolfram is less compelling. Simple processes can produce the high complexity of disorder, but simple processes, which are relatively small in number, merely sample a very small part of the huge class of complex and disordered configurations. The only way of reaching this huge class is to start with complexity. Accordingly I am not sure I know what Wolfram means when he talks about a “maxing out” of complexity. Complexity of process seems to have no upper limit. Perhaps I do not understand Wolfram aright but it seems to me that there is a huge platonic realm out there of highly complex algorithms, whose rules are beyond our ken, algorithms whose computational resources are too great to be humanly manageable.

In saying that there is a huge class of complex and disordered configurations that are unreachable without the resource of a-priori complexity, am I in conflict with Wolfram’s view? Yes and no. ‘Yes’ because, if Wolfram’s principle is correct, then in principle any configuration can eventually be reached from a single simple “chaotic” computation (so in absolute terms what I have said above is wrong). And yet ‘no’ because I really only had practical computation in mind. In a very effective sense there is a class of configurations that are unreachable, and for the following reason. In most cases the program controlling a computation is a very much smaller object than the pattern of the computation itself; the reified computation often describes a very large object in both time and space. This means that the number of possible configurations, in terms of the combinatorial possibilities open to this huge domain of patterns, is far greater than the number of possible “small and simple programs”. Thus although, according to Wolfram, a small “chaotic” program may in time reach all possible configurations, it will not be able to reach all configurations in realistic time, simply because there are too many of them. For small programs running on a given model of computation there will be a large class of configurations out of range as far as realistic computation times are concerned.
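A toy counting argument makes the point. The bit lengths below are arbitrary figures of my own, chosen only to show the scale of the mismatch between the space of configurations and the space of small programs.

    # Counting sketch (illustrative numbers only).
    config_bits = 1000                    # a modest 1000-bit target configuration
    program_bits = 100                    # a "small and simple" 100-bit program

    num_configs = 2 ** config_bits        # 2^1000 possible configurations
    num_programs = 2 ** program_bits      # 2^100 possible small programs

    # Even if every small program promptly reached a distinct configuration,
    # the fraction of configurations reachable this way would be at most
    # 2^-(1000-100), i.e. roughly 10^-271 -- effectively none of them.
    print(f"reachable fraction at most 2**-{config_bits - program_bits}")

The gap only widens as the configurations of interest get larger, which is why a small program can only cover the remainder by being given astronomical amounts of time.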

In part 2 of this series I conjectured that there is a complementary relationship between program size and computation time: shortening program length raises the lower limit on the time needed to achieve a particular computation, whereas increasing program length by adding data allows that lower limit to be reduced. Mathematically:

program length + computation time >= a constant for the given computation*

Or in abbreviated form:

P + T >= C*

… where I have sharpened up the notion of my original “equation” by relating the sum of P and T, via an inequality, to a constant set by the computation in hand. I have been looking into this conjectured relationship in order to put it on a more rigorous footing, but for good or ill I am going to run with it for the time being to see where it takes me.

The above inequality suggests that C can be arbitrarily large. From this it follows that for these “large” computations a small P could only reify the computation after a very long time. However, the inequality also tells us that we can shorten computation times by increasing P; that is, by increasing the complexity of the program itself. Thus, whilst Wolfram’s principle of computational equivalence may be true in that any simple chaotic program can be used to describe an arbitrarily chosen computation, albeit in a prohibitively long time, complementarity means it is also true that achieving certain computations in realistic times may require prohibitively complex programs. In terms of achieving timely results there seems to be no upper limit on computational complexity and power; certain timely results will demand highly complex programs. This is the intent of my original statement: there are computations that are, practically speaking, unreachable – either that, or they require enormous a-priori data resources.
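The trade-off can be made concrete with a toy example; the task (generating primes) and the ten-entry table are my own illustrative choices, standing in for the general trade between code-plus-data and run time.

    import time

    # Short program, more computation time: generate the answer from scratch.
    def primes_by_trial(n):
        """First n primes by trial division (small P, larger T)."""
        found = []
        candidate = 2
        while len(found) < n:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    # Longer "program" (code plus embedded data), less computation time:
    # the answer is largely carried as a-priori data and simply read back.
    PRECOMPUTED = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # stands in for a large table

    def primes_by_lookup(n):
        """First n primes from an embedded table (larger P, small T)."""
        return PRECOMPUTED[:n]

    t0 = time.perf_counter()
    by_trial = primes_by_trial(10)
    t1 = time.perf_counter()
    by_lookup = primes_by_lookup(10)
    t2 = time.perf_counter()

    assert by_trial == by_lookup
    print(f"trial division: {t1 - t0:.6f}s, table lookup: {t2 - t1:.6f}s")

For ten primes the difference is trivial, but the direction of the trade is the point: as the embedded table grows, P grows and the lower limit on T shrinks, in the spirit of P + T >= C.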

If a physical theory works by using huge computational resources, in terms of very large program strings and/or very long generation times, then it is of little computational utility to human beings. Fortunately for us the computational rules of the cosmic physical regime seem manageable both in terms of initial data and computation times. And yet “…it seems to me that there is a huge platonic realm out there of highly complex algorithms, whose rules are beyond our ken, algorithms whose computational resources are too great to be humanly manageable.” What if the randomness of quantum mechanics is absolute, in that it is not the result of some underlying chaotic determinism (like, say, decoherence) but a brute given input to our cosmos? If that is the case then those random input configurations may be among those that are, practically speaking, algorithmically unreachable. God knows.

Footnote:
* Obviously some nuancing of terms is needed here: for some “computations” the computation time increases as the input data length is increased – factoring a number is an example.

Sunday, September 13, 2009

Bunk

I recently read the following on the means that illusionist Derren Brown used to predict lottery numbers:

Sat 12 Sep 7:25 AM. By Press Association. Illusionist Derren Brown claimed he was able to predict the lottery numbers just by combining the random guesses of a panel of members of the public. On Wednesday 2.7 million viewers tuned in to Channel 4 to see Brown perform his latest trick. Broadcasting from a secret studio location, Brown told viewers he had written his predictions on a line of balls, which he then revealed after watching the draw live on BBC1. He said "that's a year of my life right there. I can't believe it", as he turned over the balls to reveal they were an exact match for the winning numbers: 2, 11, 23, 28, 35, 39. On Friday night he told viewers that he used "a powerful beautiful secret that can only be achieved when we all put our heads together". He gathered a panel of 24 people who wrote down their predictions after studying the last year's worth of numbers. Then they added up all the guesses for each ball and divided it by 24 to get the average guess. On the first go they only got one number right, on the second attempt they managed three and on the third they guessed four. By the time of last week's draw they had honed their technique to get six correct guesses. Brown claims that the predictions were correct because of the "wisdom of the crowd" theory which suggests that a large group of people making average guesses will come up with the correct figure as an average of all their attempts.


The above is, of course, all part of the act of obfuscation. The things one has to say (and do) to be an illusionist.

Wednesday, September 02, 2009

Intellectual Bankruptcy

PZ Myers and Larry Moran both post on a YouTube YEC video promoting the population growth argument for a young Earth (see here and here). The YEC argument from population growth uses a crass form of uniformitarianism: it assumes a simple population growth formula and plugs a growth constant into it in order to come up with a four-and-a-half-thousand-year history of uniform exponential population growth since Noah.
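For reference, here is a rough sketch of the kind of calculation the video leans on; the starting population, time span and target figure are my own assumptions, chosen only to show how the argument works.

    # The YEC-style back-of-envelope calculation (my own illustrative figures).
    start_population = 8          # Flood survivors in the YEC account
    years = 4500                  # roughly the YEC time since the Flood
    target_population = 6.8e9     # approximate world population in 2009

    # Uniform exponential growth: P(t) = P0 * (1 + r)**t.
    # Solve for the constant annual growth rate r that hits the target.
    r = (target_population / start_population) ** (1 / years) - 1
    print(f"required constant growth rate: {r:.3%} per year")   # about 0.46% per year

    # The crass uniformitarianism lies in assuming r never varies over 4500
    # years -- no famine, disease, war or climate fluctuation ever intervenes.

The arithmetic is trivially self-consistent; the problem is the assumption of a constant growth rate, which is exactly the point taken up below.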

To be fair, we should of course use the very methods of YEC’s own science of negation to question the simplistic assumptions built into their calculations. This would no doubt lead us to consider just what effects the vagaries of food supply, predation, disease, climate etc. might have on the fluctuating fortunes of a struggling small population.

But then this exercise has little to do with science; it is slick showmanship and sales talk directed at the converted in order to convince them that the 6000-year-old Earth theory is in safe patriarchal hands, thus promoting the sense of security needed to bolster fundamentalist flocking instincts.

But it cuts no ice with critics; Myers and Moran laugh the video to scorn. For them the whole thing is a joke and further evidence of the intellectually debauched state of Christianity, which in turn is further confirmation that Christians can make no claim to special revelation from a Divine being. But the trump card of YEC theorists is fideism: “For the preaching of YEC theory is foolishness to those that are being lost, but to us who are being saved it is the power of God”.


Stop Press 4/9/9
In this link PZ Myers notes the impenetrable fideist defensive barrier that the Mormons have thrown up: you have no chance of seeing the truth of Mormon history unless you believe in Mormonism in the first place – and that belief only comes to those with a sublime connection with the “Spirit of God”.