
Wednesday, November 10, 2010

Self Organisation


According to my reading of Richard Johns, this complex fractal landscape can't happen

As promised in my last two blogs, here is my post giving a more detailed look at Richard Johns’ paper on self organization. The central part of Johns’ paper is his “Limitative Theorem”, in which he states:

A large, maximally irregular object cannot appear by self organization in any dynamical system whose laws are local and invariant.

Understood in its most literal sense the above statement by Johns is false. But, to be fair, it does rather depend on how one interprets the terms in Johns’ formulation. If these terms are interpreted using excluded-middle “true or false” logic, then Johns’ central thesis is beguiling. However, if they are interpreted in the fuzzy terms of probabilities, which deal with typical cases rather than special cases, then Johns is seen to have a point.

***

If you toss a coin many times you end up with a sequence of heads and tails that can be conveniently represented by a long binary pattern of 1s and 0s. This pattern of 1s and 0s can be broken down into a contiguous series of segments of any chosen length of, say, n tosses. The probability of finding a segment containing k 1s is then given by Bernoulli’s formula:

P(k) = C(n, k) × p^k × (1 − p)^(n − k)

where p is the probability of heads being thrown and where:

C(n, k) = n! / (k! (n − k)!)
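As a quick sanity check, here is a minimal Python sketch of the formula above (not from the original post; the function name bernoulli_prob is my own):

```python
# A minimal sketch of Bernoulli's (binomial) formula: the probability that
# a segment of n tosses contains exactly k ones.
from math import comb

def bernoulli_prob(n: int, k: int, p: float = 0.5) -> float:
    """Probability of exactly k heads in a segment of n tosses."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Every specific 10-bit pattern from a fair coin has probability 2**-10,
# but segments with k = 5 ones are collectively the most probable because
# comb(10, 5) = 252 of them exist.
print(bernoulli_prob(10, 5))  # ~0.246
```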
A corollary of this formula is that segments with equal numbers of 1s have equal probability. For example, with n = 10 a segmental configuration such as 1010101010 is just as probable as, say, 1111100000 or 1000101011.

Bernoulli’s formula is most likely to be thought of as a device for statistical prediction, in as much as it can be used to calculate the expectation frequencies of segmental configurations like 1000101011 or 1010101010. The reason the formula works is that the number of possible sequences that can be constructed from 1s and 0s with segmental frequencies consistent with the formula is overwhelmingly large compared to the number of sequences that deviate from the Bernoulli expectation frequencies. Thus, given that a sequence is generated by a chance source, it is clear that a member of the class of sequences conforming to the Bernoulli expectation values is going to be a highly probable outcome.

But statistical prognostication is not the only way that Bernoulli’s formula can be used. It is also possible to use the formula as an indication of the configurational complexity of a sequence. To do this we take a given sequence of 1s and 0s and enumerate the frequencies of its segmental configurations. If the sequence so enumerated returns frequencies consistent with the Bernoulli values then that sequence is deemed to be maximally complex. If, given a set of enumerated segmental frequencies, the number of possible sequences consistent with that enumeration is represented by Z, then Z is maximised for a Bernoulli sequence. If, on the other hand, a sequence deviates from the Bernoulli expectation values, then correspondingly Z will deviate from its maximum value. The departure of Z from its maximum is a measure of the departure of the sequence from maximum complexity.

I have dealt with these matters more rigorously elsewhere, but for the purposes of this article we can now get a handle on Richard Johns’ paper. Johns’ paper makes use of something he calls “irregularity”, a concept closely related, if not identical, to the concept of configurational complexity I have defined above: Johns breaks his patterns up into small chunks and then defines irregularity in terms of the number of possible configurations consistent with the frequency distribution of the segmental configurations.
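To make the enumeration concrete, here is a hedged Python sketch of one plausible reading of Z (the helper names are mine, and this is an illustration of the idea rather than Johns’ own construction): tally the length-n segments of a string by their number of ones, then count, in logarithms, the sequences sharing that tally.

```python
# A hedged sketch (my own reading, not Johns' construction) of the disorder
# measure Z: tally the length-n segments of a binary string by their number
# of ones, then compute ln(Z), the log of how many strings share that tally.
from collections import Counter
from math import lgamma

def segment_counts(bits: str, n: int) -> Counter:
    """Tally the length-n segments of a binary string by their count of ones."""
    segs = [bits[i:i + n] for i in range(0, len(bits) - n + 1, n)]
    return Counter(seg.count("1") for seg in segs)

def log_Z(counts: Counter, n: int) -> float:
    """ln of the number of strings with this segmental profile: arrangements
    of the segments times the distinct length-n segments in each k-ones class."""
    def log_comb(a: int, b: int) -> float:  # ln C(a, b) via log-gamma
        return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)
    total = sum(counts.values())
    logz = lgamma(total + 1) - sum(lgamma(f + 1) for f in counts.values())
    logz += sum(f * log_comb(n, k) for k, f in counts.items())
    return logz

bits = "10110100011101001010" * 5  # a 100-bit example string
counts = segment_counts(bits, 5)
print(counts, log_Z(counts, 5))    # ln(Z) grows as counts near the Bernoulli values
```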

Johns’ paper is an exploration of the relationship between configurational complexity (or “irregularity” as he calls it) and computational complexity. Computational complexity, as Johns understands it, is measured in terms of the execution time needed to reach a pattern; in particular he is concerned with the time needed to generate configurationally complex patterns. Johns arrives at the conclusion - as stated by his Limitative Theorem above - that maximally irregular patterns cannot appear as a result of self organization. What he actually means by “cannot appear” is that maximally irregular patterns cannot be produced in what he calls a “reasonably short time”. Thus, according to Johns, complex irregular patterns are computationally complex – that is, they need large amounts of execution time.

A few moments’ reflection reveals that there is a correlation between computational complexity and configurational complexity, as Johns submits. This correlation arises simply because of the sheer size of the class of configurationally complex patterns. The value of Z rises very steeply as the segmental frequencies move toward the Bernoulli values; in fact the value of Z far outstrips the number of steps that can be made in a realistic computation. Thus, if we imagine a computation taking place, it is clear that given realistic execution times any computation, whether controlled by a deterministic algorithm or some aleatory process, is only going to be able to visit a relatively small fraction of the huge number Z of configurations. Hence the chances of a computation generating an arbitrarily selected complex configuration in a realistic time are very small. This, then, is essentially Johns’ Limitative Theorem: an arbitrarily selected irregular or complex configuration is very probably going to have a high computational complexity.
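A back-of-envelope calculation, with assumed but representative numbers, shows how badly Z outstrips any realistic computation:

```python
# A back-of-envelope sketch with assumed numbers: the disordered class for
# even a modest 1000-bit pattern dwarfs the number of steps available to
# any computation running since the Big Bang.
from math import log10

N = 1000                                 # pattern length in bits (assumed)
log10_Z = N * log10(2)                   # ~301: size of the disordered class
steps = 1e9 * 3600 * 24 * 365 * 13.8e9   # ~4.4e26 steps at 1 GHz for ~13.8 Gyr
print(round(log10_Z), round(log10(steps)))  # 301 versus 27: an astronomical gap
```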

But we must note here that “very probable” doesn’t rule out improbable cases where self organization appears to take place in a realistic time. In fact at the start of his paper Johns declares that cases where self organization depends on starting from “fine tuned initial conditions” break the rules of his game and therefore don’t classify as self organization. The big problem with Johns’ formulation is the meaning of his terms; in some people’s books an example of a complex pattern that makes a surprisingly quick appearance from particular initial conditions would still classify as self organization.

Johns’ concept of irregularity, as we have seen, is closely related to the kind of patterns generated by chance processes such as the tossing of a coin. Johns’ “irregular” patterns and those patterns produced by aleatory processes both have high values of Z. That is why I actually prefer to call them “disordered” patterns and refer to the quantity Z as “disorder”. Given this equivalence between Johns’ concept of irregularity and my concept of “disorder”, it is easy to think of examples where an irregular object is generated in a reasonably short time, apparently violating Johns’ Limitative Theorem: I am thinking of cases where elementary algorithms generate pseudo-random sequences in polynomial execution times; such rapidly generated sequences show a high value of Z. But given that there is a certain amount of doubt about how the terms of Johns’ Limitative Theorem should be interpreted, it is difficult to know whether this example can be considered to violate his thesis. Are a blank memory and a simple algorithm to be considered “fine tuned initial conditions”? In addition, we know that when Johns says in his Limitative Theorem that self organization “cannot appear” he is not speaking in absolute terms but actually means that self organization cannot appear in reasonable execution times. So perhaps we should rewrite Johns’ Limitative Theorem as follows:

A large, arbitrarily selected, maximally irregular object is unlikely to appear in reasonable time by self organization in any dynamical system whose laws are local and invariant.


This more measured form of Johns’ principle allows for the rare cases where self organization happens. For example, we know that some elementary algorithms can reach a limited subset of irregular patterns in a relatively short time, even though it is mathematically impossible for such algorithms to reach all irregular patterns in reasonable execution times. It is precisely this kind of exception to the rule that leaves the whole issue of self organisation an open question. Therefore, as far as I’m concerned, Johns’ paper has not eliminated evolutionary self organization from the inquiry. In fact it’s the same old contention that I have raised time and again on this blog: complexity is not in general a conserved phenomenon; complexity can be generated from simple symmetrical starting conditions in polynomial time. I have never seen satisfactory cognizance taken of this fact in the anti-evolution community. In fact, as I have suggested before, it is ironic that their very positing of a fabulous creative intelligence subtly subverts their anti-evolutionism; for if evolution is an example of self organization arising from the right regime of generating laws then we are dealing with a very rare case – a case that the level of creative intelligence envisaged by the anti-evolutionists is presumably more than capable of contriving. With breathtaking irony, the anti-evolutionists’ attempts to show the impossibility/improbability of evolution fail to connect the evolutionary conundrum with a concept that looms large in their culture and which stands impassively behind them: namely, Divine Intelligence!
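To make the “elementary algorithms” point concrete, here is an illustrative Python sketch (my own, assuming a classic linear congruential generator purely for illustration): a few lines of deterministic arithmetic emit, in linear time, a bit string whose statistics resemble those of a genuine coin-tossing source.

```python
# An illustrative sketch (my own, not from Johns' paper): a tiny
# deterministic rule - a classic linear congruential generator - emits
# a pseudo-random bit string in linear time, yet the output's segmental
# statistics resemble those of a Bernoulli (coin-tossing) source.
def lcg_bits(seed: int, count: int) -> str:
    """Emit `count` bits from a few lines of deterministic arithmetic."""
    state, out = seed, []
    for _ in range(count):
        state = (1103515245 * state + 12345) % 2**31  # classic LCG constants
        out.append(str((state >> 16) & 1))            # sample one middle bit
    return "".join(out)

bits = lcg_bits(seed=42, count=10000)
print(bits.count("1") / len(bits))  # hovers near 0.5, like a fair coin
```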

Some Notes on Complexity.
Although Johns rightly points out that biological structures such as DNA score high on the irregularity scale, irregularity (or “disorder” as I prefer to call it) does not capture an important characteristic of biological complexity and therefore fails to bring out the real reason for the relative computational complexity of these structures. Let me explain:

In one sense the class of disordered sequences is actually computationally elementary. For example, imagine being posed the problem of generating any irregular/disordered pattern. The simplest way of solving this problem is to toss a coin: using this method of “computation” we can arrive at an irregular/disordered pattern in as short a time as it takes to make the required number of tosses. So let me suggest that a better measure of complexity than the bare value of Z is given by this equation:

Complexity, C ~ Log (Z/W),

where Z is the disorder of the case sought for and W is the size of the specified class whose members all classify as a “hit”. For example, if we specify that we require any maximally disordered pattern then W ~ Z and so C ~ 0; which expresses the fact that this particular problem is of low computational complexity, because simply flipping a coin is likely to produce what we want. But if, on the other hand, we specify that the pattern generated is to be a self-perpetuating machine, it is very likely that Z will be large and W small, thus making Log(Z/W) large. Hence the point I am trying to make is that what makes living things computationally complex is not so much their configurational complexity as measured by Z, but rather a combination of configurational complexity and their rarity.
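A small worked illustration of this equation, using made-up numbers for Z and W, may help:

```python
# A worked illustration with made-up numbers: when the target class W is
# as large as the disordered class Z, the generation problem is easy;
# when W is tiny, C ~ Log(Z/W) becomes large.
from math import log

Z = 2.0 ** 100              # assumed disorder of the case sought for
W_any_disordered = Z        # spec: "any maximally disordered pattern", so W ~ Z
W_rare_machine = 2.0 ** 10  # assumed: very few patterns are self-perpetuating machines

print(log(Z / W_any_disordered))  # ~0: coin flipping almost surely succeeds
print(log(Z / W_rare_machine))    # ~62: rarity, not disorder alone, drives the complexity
```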

But even this measure of complexity may only be apposite for typical conditions. If there is such a thing as a class of self-perpetuating complex patterns that can be arrived at in polynomial time from some basic physical algorithms, then it is likely that the number of basic physical algorithms is actually smaller than the configurational complexity of the patterns they produce; in which case the computational complexity of these patterns would be smaller than their configurational complexity! In fact, although we don't actually get a "free lunch" here, we at least get a fairly cheap lunch!

Whether or not actually existing biological structures are a rare case of evolutionary self organization, I nevertheless suspect that there are incredibly complex self-perpetuating structures out there in platonic space that cannot be reached by evolutionary self organization. Just imagine what they must be like!

4 comments:

  1. If we think outside the box, there is an amazing new world unfolding.

  2. Hi Timothy,

    Thanks for your detailed treatment of my paper -- actually the best I have seen.

    The objection you raise is a good one, although I think it can be answered. (Jeffrey Shallit made the same point in an email to me.)

    There are simple (i.e. short) algorithms that can generate irregular strings, as I define them. You mention pseudo-random sequences, and I have previously thought about the digits of Pi. Hence these things are algorithmically simple, despite their irregularity.

    As I pointed out to Shallit, I claim that irregular objects are *dynamically* complex, not *algorithmically* complex. And, while it seems certain that dynamically simple objects are algorithmically simple (since dynamical systems can be emulated by computers), the converse is far less obvious. In other words, while the digits of Pi can be generated by a short program, they might not be producible easily within a dynamical system, from a random initial state.

    If a converse theorem does hold, however (i.e. algorithmically simple objects are dynamically simple) then my arguments have gone wrong somewhere. But even in that case, self-organisation theories of evolution will be in a difficult position. For they will then be committed to the claim that living organisms are algorithmically (and dynamically) simple. In other words, living organisms are like Pi, merely *appearing* to be complex, while in fact being generated by a very short program. (Vastly shorter than their genomes, for example.)

    Following Richard Dawkins, who coined the term "designoid" for an apparently designed object, one might say that living organisms are "complexoid". While perhaps not obviously false, this view is likely to be very unattractive.

  3. Thanks for the comment Richard.

    I am not sure I understand what you mean by producing an object *dynamically* as opposed to *algorithmically*. In your paper you seem to be using a cellular automaton model to generate your objects. No problem with that, except that as far as I understand cellular automata can be simulated algorithmically and therefore fall under the algorithmic category. Moreover, if Stephen Wolfram’s work is anything to go by (and some of my own work), cellular automata also generate complex “pi”-like patterns pretty quickly – complex in the sense you have defined in your paper; that is, they show high irregularity, or “disorder” as I call it.

    Now this doesn’t mean to say, of course, that life actually is the product of a clever dynamics, but given both your definition of irregularity and your use of an algorithmic cellular automaton model, it seems we are back to square one – we haven’t yet succeeded in eliminating self organization from the enquiry. However, for reasons I have given in this blog entry I can see why a “soft focus” version of your limitative theorem applies. But given that the resource of an intelligent designer is within my particular terms of reference, it seems not outside the bounds of possibility that, should the required fruitful self organizing regime actually be a mathematical possibility, that intelligence is capable of contriving it.

    If (repeat if) this is the case then it is wrong to conclude that life must therefore be algorithmically simple, for this reason: The space of all possible algorithms, though a lot smaller than the space of all possible configurational objects, is still a very, very large space as far as we humans are concerned. I suspect (and this is only a hunch) that not any old algorithm has the right self organising properties required to generate living things – in which case selecting the right algorithm is itself a computationally complex task; that is, life is not algorithmically simple in absolute terms.

    One more thing: imagine that you were given the problem of Pi in reverse; that is, you were given the pattern of digits and yet had no clue as to what, if any, simple algorithm generated it. The hard problem then is to guess the algorithm – generating Pi after you have found the algorithm is the easy problem. So to me life remains algorithmically complex even if it’s a product of self organization.

  4. Correction:

    The space of all possible algorithms,

    should have read

    The space of all possible small algorithms,
