But – and here’s the big but – a simple counting algorithm does not respond to the information in the current results string (the results string is the equivalent of the tape in a Turing machine). When an algorithm starts receiving feedback from its ‘tape’, things change considerably – we then move into the realm of non-linearity.
Given this ‘feedback’ effect, and using some assumptions about what a Turing machine could at best achieve, I have obtained a crude formula giving an upper limit on the speed with which a Turing machine could develop a random sequence. A random sequence would require at least ‘t’ steps to develop, where:
t ~ kDn/log(m)
and where ‘k’ is a constant whose factors I am still pondering, ‘D’ is the length of the sequence, ‘n’ is an arbitrarily chosen segment length whose value depends on the quality of randomness required, and ‘m’ is the number of Turing machine states. This formula is very rough, but it makes use of a frequency-profile notion of randomness I have developed elsewhere. The thing to note is that the non-linearity of algorithmics (a consequence of using conditionals) means that the high complexity of randomness could conceivably be developed in linear time. This result was a bit of a surprise, as I have never been able to disprove the notion that the high complexity of randomness requires exponential time to develop – a notion, in any case, not supported by the speed with which ‘random’ sequences can be generated in computers. Paradoxically, the high complexity of randomness (measured in terms of its frequency profile) may not be computationally complex. On the other hand, it may be that what truly makes randomness computationally complex is that, for high-quality randomness, D and n are required to become ‘exponentially’ large in order to obtain a random sequence with a ‘good’ frequency profile.
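To make the scaling behaviour of the bound concrete, here is a minimal sketch. The constant k is undetermined in the formula above, so it defaults to 1 purely for illustration, and the function name `steps_lower_bound` is my own:

```python
import math

def steps_lower_bound(D: int, n: int, m: int, k: float = 1.0) -> float:
    """Rough bound t ~ k*D*n/log(m) on the minimum number of steps an
    m-state Turing machine needs to develop a random sequence of
    length D, judged on segments of length n."""
    return k * D * n / math.log(m)

# Doubling the sequence length D doubles the bound: growth is linear
# in D, matching the claim that the high complexity of randomness
# could conceivably be developed in linear time.
print(steps_lower_bound(1000, 8, 16))
print(steps_lower_bound(2000, 8, 16))
```

Note that m enters only logarithmically: adding machine states buys very little speed compared with shortening the sequence or relaxing the segment length.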
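The frequency-profile notion itself is developed elsewhere, so the sketch below assumes one simple reading of it: count every length-n segment of a binary string with a sliding window, and measure how far the counts stray from the uniform expectation. All names here are hypothetical, not the author’s:

```python
from collections import Counter

def frequency_profile(s: str, n: int) -> Counter:
    """Sliding-window counts of every length-n segment of the binary string s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def profile_deviation(s: str, n: int) -> float:
    """Worst relative deviation of segment counts from the uniform
    expectation over all 2**n possible binary segments; values near 0
    suggest a 'good' frequency profile at segment length n."""
    counts = frequency_profile(s, n)
    windows = len(s) - n + 1
    expected = windows / 2 ** n  # uniform expectation per segment
    worst = max(abs(counts.get(format(v, f'0{n}b'), 0) - expected)
                for v in range(2 ** n))
    return worst / expected
```

The alternating string ‘0101…’ has a perfectly flat profile at n = 1 but fails badly at n = 2, which illustrates why larger segment lengths n (and with them longer sequences D) are needed as the required quality of randomness rises.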