This post concerns a little computational conundrum that I have been pondering for some time. I now think I may have a handle on the solution.

Firstly, some preamble.

Let’s assume that we can represent a general computation using a model similar to one I suggested in this post as follows:

Imagine a very long binary string – long enough to express any kind of computer output or result required. Now imagine all the possible configurations this string could take, and arrange them into a massive network of nodes where each node is connected to its neighbouring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’), then a given binary configuration will be connected to Lr other nodes, each of which differs from it by just one bit.
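This one-bit-flip network can be sketched in a few lines of Python (my own illustration; the function name `neighbours` is just a label I have chosen for it):

```python
# A node of the network is a binary string; its neighbours are the strings
# reachable from it by flipping exactly one bit.
def neighbours(pattern: str) -> list[str]:
    return [
        pattern[:i] + ("1" if bit == "0" else "0") + pattern[i + 1:]
        for i, bit in enumerate(pattern)
    ]

# As claimed, a string of length Lr has exactly Lr one-bit neighbours:
print(neighbours("0000"))  # ['1000', '0100', '0010', '0001']
```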

Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, whose length is represented by Lp.
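One way to make the ‘path string’ concrete (an encoding I am assuming for illustration; the idea above does not fix one) is to let each step of the path name the bit to be flipped next:

```python
def follow_path(start: str, path: list[int]) -> str:
    """Trace a path through the network: each entry of `path` is the index
    of the bit flipped at that step, so len(path) plays the role of Lp."""
    pattern = list(start)
    for i in path:
        pattern[i] = "1" if pattern[i] == "0" else "0"
    return "".join(pattern)

# Three single-bit steps take '0000' through '1000' and '1010' to '1011':
print(follow_path("0000", [0, 2, 3]))  # '1011'
```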

The essential idea here was that algorithmic change involves a path through a network of states separated by some kind of increment – in this case a bit flip. Since I wrote the above I have stayed with the general idea of a network of binary patterns linked by some form of increment, with algorithmic change effectively being a route through this network. However, in the above model the patterns are networked together by bit flipping. This seems a fairly natural way of linking the binary patterns, but as I have thought about it I have come to understand that the method of linking the patterns depends on the computing model used; in fact the computing model also seems to affect the set of patterns actually allowed, and even just how the “cells” that host the binary bits of the pattern are sequenced. Only once these network variables are specified can we start considering just what the computing model thus defined can be programmed to do. In this network model of computation I envisage the running state of the computation engine, its program and its workings in memory all to be uniquely represented by network patterns; thus part of a particular pattern will represent the computation engine’s state and its program.

So how does this pan out for, say, a Turing machine? The configurational state of the Turing machine’s tape in terms of allowed characters seems to have no limitation – all configurations of characters are allowed. But it seems that not every network transition is allowed. If we take into account the fact that we must represent the position of the machine on the tape as part of a network pattern, then it is clear that arbitrary pattern transitions can’t be made: a character cell on the output tape cannot be changed if the machine is not at that position. There will also be some limitations on the actual patterns allowed, for depending on how we are to code the machine’s programs, we will expect syntax limitations on those programs. So a Turing computing model puts constraints on both the possible network configurations and the network transitions that can be carried out.
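The locality constraint can be illustrated with a small check (my own sketch; I represent a configuration simply as a tape string plus a head position, and ignore the machine's internal state):

```python
def allowed_transition(tape_a: str, head_a: int, tape_b: str, head_b: int) -> bool:
    """A network transition is allowed only if the tapes differ at most at the
    cell under the head, and the head moves at most one cell per step."""
    diffs = [i for i, (x, y) in enumerate(zip(tape_a, tape_b)) if x != y]
    return diffs in ([], [head_a]) and abs(head_b - head_a) <= 1

print(allowed_transition("0000", 1, "0100", 2))  # True: cell 1 written, head moved right
print(allowed_transition("0000", 1, "0001", 2))  # False: cell 3 changed remotely
```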

Another point to make here is that what counts as an incremental change in one computing model may involve a whole sequence of changes in another. Compare, for example, a Turing model with a “counter program” computing model. The Turing model is fairly close to my bit-flipping model: changes in the tape involve simple character swaps. However, in a counter program the variables are numbers and the increments are plus or minus 1. This means that the character patterns of two numbers separated by an increment of 1 can look very different; e.g. 9999 + 1 = 10000. Thus, in counter programs two patterns separated by an increment of 1 or -1 may differ by more than one character.
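The 9999 + 1 example can be checked directly (my own sketch; I pad the shorter decimal string with leading zeros so the character cells line up):

```python
def char_diff(a: str, b: str) -> int:
    # Number of character cells that differ, after left-padding with zeros.
    w = max(len(a), len(b))
    return sum(x != y for x, y in zip(a.rjust(w, "0"), b.rjust(w, "0")))

print(char_diff("1230", "1231"))         # 1: a single-cell change
print(char_diff("9999", str(9999 + 1)))  # 5: one counter increment, five cells change
```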

So what is my problem, and how does the foregoing preamble help?

The problem I have pondered for some time (for a few years actually!) arises from the fact that there exists a class of small/simple programs that can in relatively quick time generate disordered patterns (Wolfram’s cellular automata pictures are an example). It follows therefore that there is a “fast time” map from members of this class of small programs to members of the class of disordered patterns.* But the class of disordered patterns is far, far larger than the class of short programs. Hence we conclude that only a tiny subset of disordered patterns map to “fast time” small programs. Now here’s the problem: why should some disordered patterns be so favoured? The link that a relatively small class of disordered patterns has to simple mathematics grates against my feeling that across the class of disordered patterns there will be symmetry, and that no disordered pattern should be specially favoured.
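Wolfram's Rule 30 is the standard instance of such a small program: its update rule fits on one line (new cell = left XOR (centre OR right) is the published Rule 30 formula; the wrap-around boundary below is my own simplification), yet it generates disordered rows in fast time:

```python
def rule30(cells: list[int]) -> list[int]:
    """One step of Wolfram's Rule 30: new cell = left XOR (centre OR right).
    Wrap-around boundaries are a simplification of mine."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# A tiny rule, yet a single seeded cell quickly produces a disordered row:
row = [0] * 7 + [1] + [0] * 7
for _ in range(5):
    row = rule30(row)
```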

My current proposed solution to this intuitive problem is as follows. The computing model is itself a variable, and hence it is very likely that different computing models will favour different sets of disordered patterns with a map to fast-time generation algorithms. In fact my guess is that the computing model variable has such a large degree of freedom that, given the right model, it is possible to generate any configuration in fast time. The computing model variable thus restores symmetry over the class of disordered configurations. But in order to cover the whole class of disordered configurations with fast-time maps, where does the enormous variability of the computing model come from? It seems to come from several sources. One, as we have seen above, is the way the computing model wires the network of increments. Another degree of freedom is found in the sequencing of the patterns in the network: a Turing machine, via its quasi-continuous movements up and down a tape of cells, effectively imposes a particular order on the cells of the output. It is possible to imagine another Turing machine that has an entirely different cell ordering, and thus it would access an entirely different set of disordered configurations in fast time. I suggest, then, that the possibilities of the computing model look to be enough to cover the whole class of disordered configurations with fast-time maps, thus restoring the symmetry I expect.
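The cell-reordering degree of freedom can be sketched as a fixed permutation applied to a machine's output (my own illustration): any pattern the original machine generates in fast time corresponds, through the permutation, to a different pattern generated equally fast by the reordered machine.

```python
def reorder(pattern: str, perm: list[int]) -> str:
    """View the same output through a different cell ordering: cell i of the
    reordered machine shows cell perm[i] of the original machine's output."""
    return "".join(pattern[p] for p in perm)

fast_pattern = "0010111010"             # suppose this is fast for the original machine
perm = [9, 3, 7, 0, 5, 1, 8, 2, 6, 4]  # an arbitrary alternative cell ordering
print(reorder(fast_pattern, perm))     # a different pattern, fast for the reordered machine
```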

* Footnote

The “fast time” stipulation is important. Some single simple programs can, given enough time, eventually generate any selected pattern, and thus on this unrestricted basis a simple program maps to all output configurations.

