Here is the latest on my attempts to apply the notion of disorder to algorithms.
In 1987 I compiled a private paper on the subject of disorder. I defined randomness using its pattern frequency profile and related it to its overwhelming statistical weight, showing that, not surprisingly, the frequency profile of randomness has far more particular ways in which it can be contrived than any other kind of frequency profile. I never thought of this work as particularly groundbreaking, but it represents a personal endeavour to understand an important aspect of the world, and ever since then this notion of randomness has been an anchor point in my thinking. For example, for a long while I have worked on a project that reverses the modern treatment of randomness: rather than throw light on randomness using algorithmic information theory, I have attempted to investigate algorithmics using this frequency profile notion of randomness.
I have recently made some progress on this project, which is number 2 on my project list. I’m still very much at the hand-waving stage, but this is how things look at the moment:
Imagine a very long binary string, long enough to express any kind of computer output or result required. Now imagine all the possible configurations this string could take, and arrange them into a massive network of nodes where each node is connected to its neighbouring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’) then a given binary configuration will be connected to Lr other nodes, each of which differs from it by just one bit.
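This network is what is sometimes called a hypercube of bit strings, and the neighbour count is easy to check with a small sketch (the short length of 8 bits here is purely illustrative; the essay imagines a very long string):

```python
Lr = 8  # illustrative length only; the idea assumes a very long string

def neighbours(node: str):
    """All strings reachable by flipping a single bit -- exactly Lr of them."""
    flip = {'0': '1', '1': '0'}
    return [node[:i] + flip[node[i]] + node[i + 1:] for i in range(len(node))]

start = '0' * Lr
print(len(neighbours(start)))  # prints 8: one neighbour per bit position
```

Each of the Lr neighbours differs from the starting node in exactly one position, as the network picture requires.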
Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, which has a length Lp.
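One way to make the path string concrete (this encoding is my own assumption, not forced by the picture) is to read it as a list of bit positions, each entry naming the bit to flip at that step:

```python
def apply_path(result: str, path: list[int]) -> str:
    """Trace a path through the network: each entry names the bit to flip."""
    bits = list(result)
    for i in path:
        bits[i] = '1' if bits[i] == '0' else '0'
    return ''.join(bits)

# Three steps: flip bit 0, then bit 2, then bit 0 again (undoing the first flip).
print(apply_path('0000', [0, 2, 0]))  # prints 0010
```

Under this encoding a path of k steps visits k nodes beyond the start, and a path may revisit nodes, as the example shows.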
Now, if we started with a result string of Lr zeros we could convert it into a random sequence by traversing the network with a path string of minimum length, where clearly Lp ~ Lr. This is the shortest possible path length for creating a random sequence. However, to achieve this result so quickly would require the path string itself to be random. But what if for some reason we could only traverse very ordered paths? How long would an ordered path string have to be to produce a complex, disordered result? This is where it gets interesting, because it looks as though the more ordered the path string is, the longer it has to be to generate randomness. In fact, very ordered paths have to be very long indeed, perhaps exponentially long; i.e. Lp >>> Lr for ordered paths and disordered results. This seems to be my first useful conclusion. I am currently trying to find a firmer connection between the path strings and algorithms themselves.
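A toy numerical sketch makes the contrast plausible, though it proves nothing. Here I use compressed size as a crude proxy for disorder (my own choice of measure): a random path of length Lr turns the all-zeros string into something incompressible, while an equally long but highly ordered path (flipping bits 0, 1, 2, … in sequence) merely lands on another highly ordered node.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Crude proxy for disorder: shorter compressed output = more ordered."""
    return len(zlib.compress(s.encode()))

def apply_path(result: str, path: list[int]) -> str:
    """Trace a path through the network: each entry names the bit to flip."""
    bits = list(result)
    for i in path:
        bits[i] = '1' if bits[i] == '0' else '0'
    return ''.join(bits)

Lr = 512
zeros = '0' * Lr

# Disordered path: Lr randomly chosen bit flips (Lp ~ Lr).
random.seed(1)
random_path = [random.randrange(Lr) for _ in range(Lr)]

# Ordered path of the same length: flip each bit once, in order.
ordered_path = list(range(Lr))

disordered_result = apply_path(zeros, random_path)
ordered_result = apply_path(zeros, ordered_path)  # just the all-ones string

# The random path reaches an incompressible-looking node; the equally long
# ordered path reaches another highly compressible one.
print(compressed_size(disordered_result) > compressed_size(ordered_result))
```

An ordered path of length ~Lr simply cannot escape the ordered region of the network, which is at least consistent with the conjecture that ordered paths must be far longer than Lr to reach a disordered result.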