Monday, January 28, 2008

The Intelligent Design Contention: Part 4

Casey Luskin
In this web article Casey Luskin, Intelligent Design apologist, responds to America's National Academy of Sciences' negative assessment of ID with a comprehensive rebuttal. The article exploits the self-criticizing quotes of evolutionary theorists who candidly admit weaknesses and gaps in the theory in relation to a variety of outstanding problems: the inconsistencies in trying to build phylogenetic trees, the poor state of abiogenesis, problems in relating evolution to the fossil record, and the patchy evidence of hominid evolution. For me these problems are not unexpected given the nature of the beast – evolution is one of those big historical theories dealing with a complex ontological category. My own gut feeling is that evolutionary theory, given what it is trying to achieve, does a good job of achieving it – that of linking a diversity of observations into one theoretical framework. It will, of course, never be as sure-footed as, say, celestial mechanics or atomic theory, simply because it deals with such epistemologically intractable objects as deep time and, it hardly needs be said, the most complex objects we know of – living things. It is unfair, as Luskin does in his article, to make comparisons with the much simpler objects that physics studies. Evolution is certainly not less well founded than some of the other epistemologically tricky things I believe in like, for example, historical narratives, middle-of-the-road socialism, the constitutional monarchy, or theism. I take evolution and evolutionists seriously. It’s one thing to criticize evolution, but it’s quite another to attempt to advance an alternative theory that is as successful.

But I also take Luskin and his fellow ID proponents seriously, and especially that key concept of ID, irreducible complexity. In this series of posts I will be using my blog as a kind of sounding board to help develop my abstract ideas in relation to this key ID notion. In ID theory, if theory it is, the whole edifice of intelligent design stands or falls by the notion of irreducible complexity. Hence it is this concept that I want to focus my efforts on here.

In his article Luskin addresses the NAS treatment of irreducible complexity. He answers the sort of criticism of irreducible complexity that we find in the video of Ken Miller which I posted in my last entry on this subject:

Dembski …. recognizes that Darwinists wrongly characterizes irreducible complexity as focusing on the non-functionality of sub-parts. Conversely, pro-ID biochemist Michael Behe, who popularized the term “irreducible complexity,” properly tests it by assessing the plausibility of the entire functional system to assemble in a step-wise fashion, even if sub-parts can have functions outside of the final system. The “leap” required by going from one functional sub-part to the entire functional system is indicative of the degree of irreducible complexity in a system. Contrary to the NAS’s assertions, Behe never argued that irreducible complexity mandates that sub-parts can have no function outside of the final system.

Luskin also quotes Michael Behe, the “Galileo” of ID, who explains why the existence of the Type Three Secretory System as a precursor of the E. coli flagellum proves little:

At best the Type Three Secretory System represents one possible step in the indirect Darwinian evolution of the bacterial flagellum. But that still wouldn’t constitute a solution to the evolution of the bacterial flagellum. What’s needed is a complete evolutionary path and not merely a possible oasis along the way. To claim otherwise is like saying we can travel by foot from Los Angeles to Tokyo because we’ve discovered the Hawaiian Islands. Evolutionary biology needs to do better than that.


Our ignorance of the structure of morphospace is cutting both ways here: Behe is claiming that absence of evidence of possible evolutionary routes in morphospace is evidence of absence. On the other hand evolutionists claim that absence of evidence of these evolutionary paths is not evidence of absence. Of course, neither party has provided killer evidence either way. So who has the edge here? The claims of ID theorists and evolutionists are, logically speaking, in complementary opposition rather than symmetrical opposition. It is surely an irony that of the two sides ID, in an elementary Popperian sense, is ostensibly making the more easily refutable claims: ID is stating a quasi-universal, namely that evolutionary paths to working structures like the flagellum of E. coli don’t exist – all we need to do is find one route and the proposition is ‘falsified’. Evolutionists, on the other hand, are making an existential statement; they are claiming that the routes do exist: such statements can’t be falsified, but they can be ‘verified’ by just one case – if they find that one case their claim is ‘proved’. But both sides have their work cut out; the sheer size and complexity of morphospace makes it a little more difficult to investigate than the Pacific! Moreover, the ontological complexities of what is basically a historical subject will no doubt scupper any claims of either absolute falsification or verification, and at best only evidence tipping the balance in one direction or the other is likely to be found.

However, having said that, the existence of “islands” such as the TTSS does start to weaken the ID case: the ‘irreducible complexity’ of the flagellum structure is thus less irreducible than we were led to believe; the TTSS sets a precedent that starts to erode blanket claims that evolutionary paths don’t exist. If Behe is so demanding as to require evolutionists to map out a full path, then it is only fair that evolutionists are equally demanding and require ID theorists to show that such paths don’t exist. So given that neither party can easily provide a suite of evidence that comprehensively proves their case, we have to plump for partial evidence weighing the case in either direction. Thus, in this weaker sense the TTSS is evidence in favor of evolution, whatever Behe says.

Since the ID theorists have come up with no absolute proof that evolutionary routes to their ‘irreducibly complex’ structures don’t exist, they are forced to sit with a passive uneasiness, hoping that such paths don’t pop out of the scientific woodwork. There is a marked difference in the strategic positions of the opposing sides: ID theorists are in passive defense, hoping that evolutionary paths through morphospace will not be found. Evolutionists, on the other hand, have the initiative – they can imaginatively and proactively search for solutions – and who knows, they may yet come up with the goods. Frankly, from a strategic point of view I would rather be an evolutionist.


One final question that Luskin addresses is this: Is ID science? Luskin says, Yes, of course it is! True, the notion of irreducible complexity as a bare idea is an intelligible notion that can be investigated empirically and logically, although as it deals with the structure of that complex beast we call morphospace, scientists have their work cut out. However, it is when ID theorists jump from irreducible complexity to the operation of intelligence that the waters start to muddy. When we discover an archeological object that looks like an artifact, the working assumption is that human intelligence has been at work. We can test this assumption in relation to the known traits of human beings. But ID does not identify the source of the artifaction, and in spite of using the scientific-sounding term ‘Intelligent Design’, that intelligence is a wild card: it is a naked intelligence of unknown powers and motives. This throws into sharp relief such questions as: What is the nature of intelligence? How does intelligence achieve what it does? Why is intelligence needed to create certain structures? It is these sorts of questions I hope to probe and make some progress with in this series of posts.

Monday, January 14, 2008

The Intelligent Design Contention: Part 3

Ken Miller
Here is a video of a lecture from the biologist Ken Miller (see picture) speaking out against Intelligent Design. Miller is a Catholic and an evolutionist. I don’t fully agree with all that he says about the nature of science: I don’t think a clear-cut distinction can be made between the ‘natural’ and ‘supernatural’, with science only dealing in the former; the ontological and epistemological categories in our world are too blended for us to sharply partition the world in this way. However, the philosophy of science isn’t the issue I wish to pursue here. In this connection I am more interested in Miller’s comments on the ‘science’ of ID. Miller uses his area of expertise in biology to good effect when he successfully challenges the ID notion of irreducible complexity, especially in connection with the E. coli flagellum, the immune system and the blood-clotting cascade. He shows the ID view that parts of these systems are of no use by themselves to be false. He also makes some very notable observations on the tactics being used by the ID community.

Miller makes a particularly compelling point (apparently one of the points that helped carry the Dover trial ruling against ID) when he suggests that the introduction of ID into classrooms under the rubric ‘Teach the Controversy’ creates a false dichotomy between science and religion; that is, it frames the debate to look as though it is ‘naturalistic’ evolution vs. ‘supernatural’ creation by God, with, of course, ID coming in on the side of God against those who, like myself, favour evolution. Ironically, many atheists would agree with this framing. This is typical, typical of so much evangelicalism – lines are drawn in order to define the ‘view of the righteous’ and woe betide you if you find yourself on the wrong side of the line. In this way spiritual duress is applied and this has the effect of pressuring believers to fall into line.

Not that Miller’s own denomination doesn’t have a history of pressuring believers to fall into line. Although I am very uncomfortable with centrally controlled religion, in this case Miller’s denomination is reaping a benefit: If the leadership just happen to take a reasonable view on an issue this can make itself felt all the way down the line (but, of course, the reverse happens as well!). In contrast evangelicalism is highly fragmented, with local fanatics often securing and commanding ignorant and gullible followings.

Friday, January 04, 2008

The Intelligent Design Contention: Part 2

Intelligent Design Examples
The de facto symbol, rallying emblem, and prototype of the Intelligent Design school is the propulsion flagellum of the bacterium Escherichia coli. A complex motor-like construction delivers the revolutions to the flagellum. The ID case may derive some of its compelling quality from diagrammatic representations of the flagellum drive. These diagrams (see picture) show a structure with rotary components that has at least a superficial resemblance to a piece of human engineering. This resemblance may help elicit the intuitive gut reaction “An inventor must have designed and made it!”. Just looking at representations of the flagellum drive sets the mind up to be more susceptible to the ID contention that this structure of harmonized components must have come together in one grand slam creative act overseen by an Intelligent Designer. Clearly this designer knew all about wheel bearings long before humans were rolling stone monuments of Egyptian kings on logs. Inventors, particularly male inventors, have never been able to get the invention of the wheel out of their heads, and when they see a wheel they know there must be a like mind around somewhere.
Other examples of biological engineering used to promote ID are the blood-clotting cascade and the immune system (the arguments ‘for and against’ here can be locked onto using Google). These molecular-level systems do not just depend on the production of a single protein but consist of a molecular ‘industrial process’, a production line of inter-dependent proteins that achieves the required result. Remove one protein from this production line and the functionality of the system is severely compromised. Thus, if vital biological systems like blood clotting and the immune system fail to function for want of a single component, then the organism hosting these substructures becomes unstable and dies. In the abstract the ID argument is this: how could all these parts have come together without intelligence? For clearly, ID theorists argue, they must have come together as a whole, because removal of any one part leads to failure of function and death. These systems, they claim, cannot be made any simpler; that is, they cannot be reduced – they are ‘irreducibly complex’. The two ‘big name’ Christian theorists associated with the defense of the concept of irreducible complexity are Michael Behe and William Dembski. In some Christian circles they are megastars, Davids fighting courageously against the Goliaths of evolutionary theory.

I recently heard about another ID theorist whilst reading Sandwalk, the blog of atheist Larry Moran. Sandwalk reported (critically, of course) on the work of Canadian ID theorist Kirk Durston, who is researching proteins, the active chemical ‘bricks’ of living things. To carry out their function the long molecular strands comprising protein molecules must be folded into appropriate shapes. According to Durston there comes a point when incremental changes in the molecular sequence of proteins completely disrupt their folding, thereby making them non-functional. Durston is claiming that if changes in the molecular structure of proteins go beyond a certain magnitude threshold they cannot do their job. Once again the ID case rests on the difficulty of conceiving how certain biological structures could have come about except as complete up-and-running systems. The concept of irreducible complexity does not recognize half-measure structures – structures either work or they don’t. Biological solutions, it is claimed, have no sense of nearness or vicinity attached to them – they have to be either bang on target or they are a complete miss.

The ID vs. Darwin debate usually centers round specific organic examples like the prototypes I have given above. There seems to be a reason for this example-by-example treatment. Morphospace is a colossal and hugely complex platonic construction, highly inhomogeneous in its possibilities, and embracing objects of different types and levels – from atomic configurations that make up proteins, through the molecular reactions of protein production lines, to the micro engineering of E. coli. One could, of course, introduce even higher types – e.g. fully blown organisms or an ant’s nest (which is effectively a structure made of many individual organisms). Another tricky facet of morphospace is that environment has a critical bearing on the stability of structures; a structure that is stable in one environment might be unstable in another. Also, a structure may affect or become part of an environment, thus giving rise to the non-linear effects of feedback. Moreover, strictly speaking morphospace doesn’t just include biological structures, but just about everything that we can conceive of, and more, that can be made from atomic matter; this even includes human artifacts like a bar of soap, a house, Lego, a jet fighter, a computer, or a Von Neumann machine. The class of objects covered by morphospace is so varied in typing and level, with so many unknown degrees of freedom, so open-ended in functional possibilities, that general analytical treatment of this ‘space’ seems to be beyond the wit of man, and hence the need to dwell on the specific rather than the general.
But in spite of all this, ID theorists are committed to the notion that in critical biological regions morphospace possesses a feature that is a barrier to evolution, or at least in the examples they constantly hark back to: they see these example biological structures standing isolated in a kind of design vacuum; that is, they are not surrounded by lower ‘marks’ or similar structures that could be part of an equally stable nexus. Thus, according to ID theory they have no stable neighboring structures that evolutionary gradualism could have passed through en route to the ‘final design’. ID theory therefore swings on the assumed disconnectedness of the regions of structural stability in morphospace. This is the rather brave and quite possibly wrong assumption on which ID theory rests, but it seems that the conceptual intractability of morphospace, so vast in its ramifying possibilities and typing, makes it difficult for evolutionists to refute this assumption with one-liners. The result is much frustration, annoyance and abrasive dialogue.

Monday, December 31, 2007

Fireside Problem

Over the Christmas period I have been grappling with a problem that has been hanging around with me for some time.
Now, it is possible to generate highly disordered sequences of 1s and 0s using an algorithm. An algorithm is also expressible as a sequence of 1s and 0s. However, whereas disordered sequences may be long – say of length Lr – the length of the algorithm sequence generating them, Lp, may be short. That is, Lp is less than Lr. The number of possible algorithm sequences will be 2^Lp. The number of possible disordered sequences will be nearly equal to 2^Lr. So, because 2^Lp is much less than 2^Lr, it is clear that short algorithms can only generate a very limited number of all the possible disordered sequences. For me, this result has a counter-intuitive consequence: it suggests that there is a small class of disordered sequences that have a special status – namely the class of disordered sequences that can be generated algorithmically. But why should a particular class of disordered sequences be so mathematically favoured when in one sense every disordered sequence is like every other disordered sequence, in that they all have the same statistical profile? My intuitions suggest that all disordered sequences are in some sense mathematically equal, and yet it seems that algorithms confer a special status on a small class of these sequences.
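As a rough aside, the counting argument can be illustrated with a few lines of code (a minimal sketch of my own; the particular lengths Lp and Lr below are arbitrary illustrative choices, not figures from the argument above):

```python
# Counting argument: algorithms of length Lp can index at most 2**Lp outputs,
# whereas there are roughly 2**Lr disordered sequences of length Lr, so only a
# vanishing fraction of disordered sequences can be generated algorithmically.

def reachable_fraction(lp: int, lr: int) -> float:
    """Upper bound on the fraction of length-lr sequences that length-lp
    algorithms could possibly generate."""
    return 2.0 ** (lp - lr)

if __name__ == "__main__":
    lp, lr = 100, 1000  # a short algorithm versus a long result string (illustrative values)
    print(f"At most a fraction {reachable_fraction(lp, lr):.3e} of the "
          f"2^{lr} possible sequences are reachable by 2^{lp} algorithms.")
```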
I think I now know where the answer to this intuitive contradiction lies. To answer it we have to go back to the network view of algorithmic change. If we take a computer like a Turing machine then it seems that it wires the class of all binary sequences into a network in a particular way, and it is the bias introduced by the network wiring that leads to certain disordered configurations being apparently favored. It is possible to wire the network together in other ways that would favour another class of disordered sequences. In short the determining variable isn’t the algorithm, but the network wiring, which is a function of the computing model being used. It is the computing model inherent in the network wiring that has as many degrees of freedom as does a sequence as long as Lr. Thus, in as much as any disordered sequence can have an algorithmically favoured position depending on the network wiring used by the computing model, then in that sense no disordered sequence is absolutely favoured over any other.

Well, that’ll have to do for now; I suppose I had better get back to the Intelligent Design Contention.

Wednesday, December 12, 2007

The Intelligent Design Contention: Part 1

The Contention
For some months now I have been reading Sandwalk, the blog of atheist Larry Moran, or as he dubs himself ‘A Skeptical Biochemist’. (He can’t be that skeptical, otherwise he would apply his skepticism to atheism and graduate from ‘skeptic’ to ‘doubter’.) One reason for going to his blog was to help me get a handle on the evolution versus Intelligent Design (ID) debate. The ID case hinges on a central issue that I describe in this post.

The ID contention is, in the abstract, this: If we take any complex organized system consisting of a set of mutually harmonized parts that by virtue of that harmony forms a stable system, it seems (and I stress seems) that any small change in the system severely compromises the stability of that system. If these small changes lead to a breakdown in stability, how could the system have evolved, given that evolution is a process requiring that such systems be arrived at by a series of incremental changes?

Complex organized systems of mutually harmonized components are termed ‘Irreducibly Complex’ by ID theorists. The term ‘irreducible’ in this context refers, I assume, to the fact that apparently any attempt to make the system incrementally simpler by say removing or changing a component, results in severe malfunction and in turn this jeopardizes the stability of the system. If the apparent import of this is followed through then it follows that there are no ‘stable exit routes’ by which the system can be simplified without compromising stability. If there are no ‘stable exit routes’ then there are no ‘stable entry routes’ by which an evolutionary process can ‘enter’.

Mathematically expressed:

Stable incremental routes out = stable incremental routes in = ZERO.

In the ID view many biological structures stand in unreachable isolation, surrounded by a barrier of evolutionary 'non-computability'. Having believed they have got to this point, ID theorists are then inclined to make a threefold leap in their reasoning: 1) They reason that at least some aspects of complex stable systems of mutually harmonized parts have to be contrived all of a piece; that is, in one fell swoop as a fait accompli. 2) That the only agency capable of creating these designs in such a manner requires intelligence as a ‘given’. 3) That this intelligence is to be identified with God.

I get a bad feeling about all this. Once again I suspect that evangelicalism is urinating up the wrong lamp post. Although the spiritual attitudes of the ID theorists look a lot better than some of the redneck Young Earth Creationists, I still feel very uneasy. So much that one is supposed to accept under the umbrella of evangelicalism is often administered with subtle and sometimes not so subtle hints that one is engaged in spiritual compromise if one doesn’t accept what is being offered. I hope that at least ID theory will not become bound up with those who apply spiritual duress to doubters.

Wednesday, November 14, 2007

Generating Randomness

I have been struggling to relate algorithmics to the notion of disorder. In my last post on this subject I suggested that an algorithm that cuts a highly ordered path through a network of incremental configuration changes in order to eventually generate randomness would require a ‘very large’ number of steps in order to get through the network. This result still stands, and it probably applies to a simple algorithm like the counting algorithm, which can be modeled using simple recursive searching of a binary tree network.

But, and here’s the big but, a simple counting algorithm does not respond to the information in the current results string (the results string is the equivalent of the tape in a Turing machine). When an algorithm starts receiving feedback from its ‘tape’, then things change considerably – we then move into the realm of non-linearity.

Given the ‘feedback’ effect, and using some assumptions about just what a Turing machine could at best achieve, I have obtained a crude formula giving an upper limit on the speed with which a Turing machine could develop a random sequence. A random sequence would require at least ‘t’ steps to develop where:

t ~ kDn/log(m)
and where ‘k’ is a constant whose factors I am still pondering, ‘D’ is the length of the sequence, ‘n’ is an arbitrarily chosen segment length whose value depends on the quality of randomness required, and ‘m’ is the number of Turing machine states. This formula is very rough, but it makes use of a frequency profile notion of randomness I have developed elsewhere. The thing to note is that the non-linearity of algorithmics (a consequence of using conditionals) means that the high complexity of randomness could conceivably be developed in linear time. This result was a bit of a surprise, as I have never been able to disprove the notion that the high complexity of randomness requires exponential time to develop – a notion not supported by the speed with which ‘random’ sequences can be generated in computers. Paradoxically, the high complexity of randomness (measured in terms of its frequency profile) may not be computationally complex. On the other hand it may be that what truly makes randomness computationally complex is that for a high quality of randomness D and n are required to become ‘exponentially’ large in order to obtain a random sequence with a ‘good’ frequency profile.
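For what it’s worth, the bound can be evaluated numerically (a toy sketch of my own; the constant k is unknown, so k = 1 and the other figures below are purely placeholder assumptions):

```python
import math

def step_bound(D: int, n: int, m: int, k: float = 1.0) -> float:
    """Toy evaluation of the rough bound t ~ k*D*n/log(m): a lower limit on the
    number of steps a Turing machine with m states needs to develop a random
    sequence of length D, judged over segments of length n. The constant k is
    unknown; k = 1.0 is a placeholder assumption."""
    return k * D * n / math.log(m)

if __name__ == "__main__":
    # Placeholder figures purely for illustration.
    print(f"t is at least about {step_bound(D=10_000, n=8, m=64):,.0f} steps")
```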

Thursday, September 27, 2007

The Virtual World

Why don’t I accept the multiverse theory, I have to ask myself. One reason is that I interpret quantum envelopes to be what they appear to be: namely, as objects very closely analogous to the Gaussian envelopes of random walk. Gaussian random walk envelopes are not naturally interpreted as a product of an ensemble of bifurcating and multiplying particles (although this is, admittedly, a possible interpretation) but rather as a measure of information about a single particle. ‘Collapses’ of the Gaussian envelope are brought about by changes in information on the whereabouts of the particle. I see parallels between this notion of ‘collapse’ and quantum wave collapse. However, I don’t accept the Copenhagen view that sudden jumps in the “state vector” are conditioned by the presence of a conscious observer. My guess is that the presence of matter, whether in the form of a human observer or other material configurations (such as a measuring device), is capable of bringing about these discontinuous jumps. (Note to self: the probability of a switch from state |w) to state |v) is given by (w|v), and this expression looks suspiciously like an ‘intersection’ probability.)

The foregoing also explains why I’m loath to accept the decoherence theory of measurement: this is another theory which dispenses with literal collapses because it suggests that they are only an apparent phenomenon, in as much as they are an artifact of the complex wave interactions of macroscopic matter. Once again this seems to me to ignore the big hint provided by the parallels with random walk. The latter leads me to view wave function ‘collapse’ as something closely analogous to the changes of information which take place when one locates an object; the envelopes of random walk can change discontinuously in a way that is not subject to the physical strictures on the speed of light, and likewise for quantum envelopes. My guess is that the ontology of the universe is not one of literal particles, but is rather an informational facade about virtual particles; those virtual particles can’t exceed the speed of light, but changes in the informational envelopes in which the virtual particles are embedded are not subject to limitations on the speed of light.
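To make the random-walk analogy a little more concrete, here is a minimal sketch of my own (not anything drawn from the decoherence literature): the envelope describing our ignorance of a one-dimensional walker’s position spreads like a Gaussian of width roughly √N, and acquiring a piece of information about the walker narrows that envelope discontinuously.

```python
import random
import statistics

def final_positions(n_steps: int, n_walkers: int, seed: int = 0) -> list[int]:
    """Final positions of many independent +/-1 random walks."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(n_steps))
            for _ in range(n_walkers)]

if __name__ == "__main__":
    n_steps = 400
    positions = final_positions(n_steps, n_walkers=5000)
    # The envelope of ignorance about the walker spreads like sqrt(N)...
    print("spread of the envelope:", statistics.pstdev(positions))
    print("sqrt(n_steps):         ", n_steps ** 0.5)
    # ...and a crude stand-in for an 'information update': learning the walker
    # ended on the positive side discards half the possibilities at a stroke.
    positive_side = [x for x in positions if x > 0]
    print("spread after the update:", statistics.pstdev(positive_side))
```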

Monday, September 24, 2007

Well Versed

I have been delving into David Deutsch’s work on parallel universes. Go here to get some insight into Deutsch’s recent papers on the subject. I’m not familiar enough with the formalism of quantum computation to be able to follow Deutsch’s papers without a lot more study. However, some salient points arise. In this paper, dated April 2001 and entitled “The Structure of the Multiverse”, Deutsch says:
“.. the Hilbert space structure of quantum states provides an infinity of ways of slicing up the multiverse into ‘universes’ each way corresponding to a choice of basis. This is reminiscent of the infinity of ways in which one can slice (‘foliate’) a spacetime into spacelike hypersurfaces in the general theory of relativity. Given such a foliation, the theory partitions physical quantities into those ‘within’ each of the hypersurfaces and those that relate hypersurfaces to each other. In this paper I shall sketch a somewhat analogous theory for a model of the multiverse”
That is, as far as I understand it, Deutsch is following the procedure I mentioned in my last blog entry – he envisages the relationships of the multiple universes as similar to the way in which we envisage the relationships of past, present and future.

On the subject of non-locality Deutsch, in this paper on quantum entanglement, states:
“All information in quantum systems is, notwithstanding Bell’s theorem, localized. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through ‘classical’ (i.e. decoherent) channels.”
Deutsch is attacking the non-local interpretation of certain quantum experiments. In this paper David Wallace defends Deutsch and indicates the controversy surrounding Deutsch’s position and its dependence on the multiverse contention. In the abstract we read:

“It is argued that Deutsch’s proof must be understood in the explicit context of the Everett interpretation, and that in this context it essentially succeeds. Some comments are made about the criticism of Deutsch’s proof by Barnum, Caves, Finkelstein, Fuchs and Schack; it is argued that the flaw they point out in the proof does not apply if the Everett interpretation is assumed.”

And Wallace goes on to say:

“…it is rather surprising how little attention his (Deutsch’s) work has received in the foundational community, though one reason may be that it is very unclear from his paper that the Everett interpretation is assumed from the start. If it is tacitly assumed that his work refers instead to some orthodox collapse theory, then it is easy to see that the proof is suspect… Their attack on Deutsch’s paper seems to have been influential in the community; however, it is at best questionable whether or not it is valid when Everettian assumptions are made explicit.”

The Everett interpretation equates to the multiverse view of quantum mechanics. Deutsch’s interpretation of QM is contentious. It seems that theorists are between a rock and a hard place: on the one hand is non-locality and absolute randomness and on the other is an extravagant ontology of a universe bifurcating everywhere and at all times. It is perhaps NOT surprising that Deutsch’s paper received little attention. Theoretical Physics is starting to give theorists that “stick in the gullet” feel and that’s even without mentioning String Theory!

Saturday, September 22, 2007

Quantum Physics: End of Story?

News has just reached me via that auspicious source of scientific information, Norwich’s Eastern Daily Press (20 September), of a mathematical breakthrough in quantum physics at Oxford University. Described as “one of the most important developments in the history of science”, the work, as far as I can assess from the report, uses multiverse theory to derive and/or explain quantum physics.

There are two things that have bugged scientists about Quantum Physics since it was developed in the first half of the twentieth century; firstly its indeterminism – it seemed to introduce an absolute randomness into physics that upset the classical mentality of many physicists including Einstein: “God doesn’t play dice with the universe”. The second problem, which in fact is related to this indeterminism, is that Quantum Theory suggests that when these apparently probabilistic events do occur, distant parts of the universe hosting the envelope of probability for these events must instantaneously cooperate by giving up their envelope. This apparent instantaneous communication between distant parts of the cosmos, demanding faster-than-light signaling, also worried Einstein and other physicists.

Multiverse theory holds out the promise of reestablishing a classical physical regime of local and deterministic physics, although at the cost of positing the rather exotic idea of universes parallel to our own. It achieves this reinstatement, I guess, by a device we are, in fact, all familiar with. If we select, isolate and examine a particular instant in time in our own world we effectively cut it off from its past (and future). Cut adrift from the past, much about that instant fails to make sense and throws up two conundrums analogous to the quantum enigmas I mentioned above. Firstly, there will be random patterns like the distribution of stars which just seem to be there, when in fact an historical understanding of star movement under gravity gives some insight into that distribution. Secondly, widely separated items will seem inexplicably related – like, for example, two books that have identical content. By adding the time dimension to our arbitrary time slice the otherwise inexplicable starts to make sense. My guess is that by adding the extra dimensions of the multiverse a similar explanatory contextualisation has finally – and presumably tentatively – been achieved with the latest multiverse theory.

Not surprisingly the latest discovery looks as though it has come out of the David Deutsch stable. He has always been a great advocate of the multiverse. By eliminating absolute randomness and non-locality, multiverse theory has the potential to close the system and tie up all the loose ends. Needless to say all this is likely to proceed against a background of ulterior motivations and may well be hotly contested, not least the contention that Deutsch has made the greatest discovery of all time!
Postscript:
1. The tying of all loose ends is only apparent; all finite human knowledge can only flower out of an irreducible kernel of fact.
2. Multiverse theory, unlike the Copenhagen interpretation of Quantum Mechanics, suggests that quantum envelopes do not collapse at all, but always remain available for interference. Hence it should in principle be possible to detect the difference between these two versions of Quantum Theory experimentally.

Thursday, August 30, 2007

Pilgrim's Project Progress on Path Strings

Here is the latest on my attempts to apply the notion of disorder to algorithms.

In 1987 I compiled a private paper on the subject of disorder. I defined randomness using its pattern frequency profile and related it to its overwhelming statistical weight, showing that the frequency profile of randomness has, not surprisingly, far more particular ways in which it can be contrived than any other kind of frequency profile. I never thought of this work as particularly groundbreaking, but it really represents a personal endeavour to understand an important aspect of the world, and ever since, this notion of randomness has been an anchor point in my thinking. For example, for a long while I have worked on a project that has reversed the modern treatment of randomness – rather than throw light on randomness using algorithmic information theory I have attempted to investigate algorithmics using this frequency profile notion of randomness.
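That statistical weight is easy to illustrate with a quick count (a toy calculation of mine, not something taken from the 1987 paper): among binary strings of a fixed length, the near fifty-fifty frequency profile typical of randomness can be realised in overwhelmingly more ways than any strongly ordered profile.

```python
from math import comb

def ways_with_k_ones(length: int, k: int) -> int:
    """Number of binary strings of the given length containing exactly k ones."""
    return comb(length, k)

if __name__ == "__main__":
    L = 100
    # The balanced profile (k = 50) dwarfs the highly ordered profiles.
    for k in (0, 10, 25, 50):
        print(f"strings of length {L} with exactly {k} ones: {ways_with_k_ones(L, k):.3e}")
```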


I have recently made some progress on this project, a project that constitutes project number 2 on my project list. I’m still very much at the hand waving stage, but this is how things look at the moment:

Imagine a very long binary string – long enough to express any kind of computer output or result required. Now imagine all the possible binary strings that this string could take up, and arrange them into a massive network of nodes where each node is connected to neighboring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’) then a given binary configuration will be connected to Lr other nodes, where each of these other nodes is different by just one bit.

Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, which has a length represented by Lp.

Now, if we started with a result string of Lr zeros we could convert it into a random sequence by traversing the network with a path string of minimum length in the order of Lp, where clearly Lp ~ Lr. This is the shortest possible path length for creating a random sequence. However, to achieve this result so quickly would require the path string itself to be random. But what if for some reason we could only traverse very ordered paths? How long would an ordered path string be for it to produce a complex disordered result? This is where it gets interesting because it looks as though the more ordered the path string is, the longer it has to be to generate randomness. In fact very ordered paths have to be very long indeed, perhaps exponentially long. i.e. Lp >>> Lr for ordered paths and disordered results. This seems to be my first useful conclusion. I am currently trying to find a firmer connection between the path strings and algorithms themselves.
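Here is a minimal sketch of this picture (my own toy illustration; encoding a path as a list of bit positions to flip is an assumption of mine, not a fixed detail of the scheme): starting from an all-zeros result string, a disordered path of roughly Lr single-bit flips lands on a disordered result, whereas an equally long but highly ordered path just shuttles around a tiny, orderly corner of the network.

```python
import random

def apply_path(result_length: int, path: list[int]) -> str:
    """Walk the bit-flip network: start from all zeros and flip one bit per step."""
    bits = [0] * result_length
    for position in path:
        bits[position] ^= 1  # one incremental change per step
    return "".join(map(str, bits))

if __name__ == "__main__":
    Lr = 64
    rng = random.Random(1)

    # A disordered path of length ~Lr reaches a disordered result very quickly.
    disordered_path = [rng.randrange(Lr) for _ in range(Lr)]
    print("disordered path ->", apply_path(Lr, disordered_path))

    # A highly ordered path of the same length merely toggles the first two bits
    # back and forth, so it never escapes a small ordered region of the network.
    ordered_path = [i % 2 for i in range(Lr)]
    print("ordered path    ->", apply_path(Lr, ordered_path))
```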

Sunday, July 15, 2007

Mathematical Politics: Part 10

The Irrational Faith in Emergence
Marx did at least get one thing right; he understood the capacity for a laissez faire society to exploit and for the poor to fall by the wayside, unthought of and neglected. In the laissez faire society individuals attempt to solve their immediate problems of wealth creation and distribution with little regard for the overall effect. Adam Smith’s conjecture was that as people serve themselves, optimal wealth creation and distribution will come out of the mathematical wash. But is this generally true? The laissez faire system is, after all, just that; a system, a system with no compassion and without heart. As each part in that system looks to its own affairs, no one is checking to see if some people are falling into poverty. And they do – in fact one expects it from complex systems theory itself. Like all systems the absolutely laissez faire economy is subject to chaotic fluctuations – stock market crashes, and swings in inflation and employment. These fluctuations are exacerbated by the constant perturbations of myriad factors, especially the moving goal posts of technological innovation. Moreover, the “rich get richer” effect is another well-known effect one finds in complex systems – it is a consequence of some very general mathematics predicting an inequitable power law distribution of wealth.

In short, economies need governing; that is, they need a governmental referee to look out for fouls, exploitation and the inequalities that laissez faire so easily generates. However this is not a call for the half-baked notions of government advocated by Marxists. Marx may have got the diagnosis at least partly right, but his medicine was poison. Slogans urging the workers to take control of the means of production may have a pre-revolutionary “the grass is greener on the other side” appeal, but slogans aren’t sufficient to build complex democratic government. Marx hoped that somehow, after the overthrow of the owning classes, the details of implementing “worker control” would sort themselves out. This hope was based on the supposition that a post-revolutionary society would consist of one class only, the working class, and since it is the clash of class interests that is at the root of conflict, a post-revolutionary society would, as a matter of course, truly serve worker interests. Ironically, the assumption that humans are capable of both rationally perceiving and serving their interests is at the heart of Marxist theory as much as it is at the heart of laissez faire capitalism.

In Marx the details of the post-revolutionary society are sparse. In my mixing with Marxists I would often hear of half-baked ideas about some initial post-revolutionary government that would act as a forum served by representatives from local worker soviets. Because the post-revolutionary society is supposed to be a “one class society” it is concluded that there will be no conflicts of interest and therefore only a one-party government will be needed to represent the interests of a single class. This government, the “dictatorship of the proletariat”, would hold all the cards of power: the media, the means of production, the police and the army. But with power concentrated in one governing party, conditions would be ripe for two classes. The bureaucracy of the one-party command economy would, of course, become populated by a Marxist elite who would not only relate to the means of production in an entirely different way to the masses, but would also be strategically placed to abuse their power. Human nature, a subject about which Marx had so little to say, would tempt the Marxist elite to exploit the potential power abuses of a one-party context. Marxists sometimes claim that there is no need for police and army in a one-party paradise, but for the one-party governing class there is even more need for army and police in order to protect their exclusive hold on power. Moreover, the local soviets would provide the means of infiltrating and controlling the working classes via a system of informers and intimidators who would no doubt masquerade as the representatives of the working class.

It is ironic that both laissez faire capitalists and Marxists have faith in the power of a kind of “emergence” to work its magic. Both believe that once certain antecedent conditions are realized we are then on the road to a quasi-social paradise. For the laissez faire capitalist the essential precursor is a free economy. For the Marxist the overthrow of the owning classes is the required precursor that, once achieved, will allow all else to fall into place. There is a parallel here with the school of artificial intelligence that believes consciousness is just a matter of getting the formal structures of cognition right: once you do this, it is claimed, regardless of the medium on which those formal structures are reified, conscious cognition will just “emerge”. Get the precursors right and the rest will just happen, and you needn’t even think about it; the thing you are looking for will just ‘emerge’.

Rubbish.

I have had enough of this Mathematical Politics business for a bit, so I think I will leave it there for the moment.

Sunday, June 24, 2007

Mathematical Politics: Part 9

Games Theory Breaks Down
Much of the theory behind the 1980s free market revolution depended on the notion of human beings competently making ‘rational’ and ‘selfish’ decisions in favour of their own socio-economic well-being. But there is rationality and rationality, and there is selfishness and selfishness. It is clear that human beings are not just motivated by the desires guiding Adam Smith’s invisible hand. If the cognizance of politics fails on this point then those other motivations, some of them like the Forbidden Planet’s monsters from the id, will one day pounce from the shadows and surprise us. And the consequences can be very grave indeed.

The story of the Waco siege gives us insight into some of the more perverse perceptions and motives lurking in the human psyche, which if summoned have priority over the need to maximize one’s socio-economic status or even the need for self-preservation. The perverse altruism of the Waco cult members was neither accounted for nor understood by the authorities dealing with the siege. No doubt the latter, with all good intentions, wanted to end the siege without bloodshed, but they appeared to be using the cold war model. They tried very hard to offer incentives to get the cult members to defect, using both the carrot (safety) and the stick (cutting off supplies and making life generally uncomfortable with psychological pressures). All to no avail. Even when the senses of the cult members were assaulted with tear gas and their lives threatened by fire they did not budge from the cult’s compound. The authorities reckoned without the cult’s perverse loyalty to David Koresh, who by this time was claiming Son of God status and was fathering ‘Children of God’ through the cult’s women. That’s nothing new either: Akhenaten, 3500 years before Koresh, thought and did the same, as have so many other deluded religious leaders. The irony of it! The socio-biologists have a field day on this point!

If anything the measures taken by the authorities stiffened the resolve of the cult members, who saw in these attempts the very thing Koresh warned them of. Koresh had succeeded in reinventing, once again, that well known cognitive virus that ensures that its hosts interpret any attempt to get rid of it as a sure sign of the presence of Satan. Hence, all attempts to dislodge the virus have the very opposite effect and simply embed it more deeply in the psyche. Thus, like the efforts to wriggle free from a knotted tangle or the struggles of a prey to escape the backward pointing teeth of a predator, the bid for freedom by the most direct path only helps to consolidate the entrapment. The only way to untie a difficult knot is to slowly and patiently unpick it bit by bit.


Cold war games theory failed at Waco on at least two counts: Firstly, the rationality of cult members was based on a false view of reality – they saw the authorities, a priori, as the agent of Satan and Koresh as God’s saviour. Secondly, the selfishness of the cult members was not that of looking after themselves. The greater selfishness was embodied in a loyalty to Koresh and his viral teaching. In the face of this the incentives and disincentives offered by the authorities were worse than useless. It never occurred to them that here was a group of people who were prepared to go for a “Darwin award”. (http://www.darwinawards.com/)

As our society faces other issues where religion looms large, such as Iraq and Islamic terrorism, we may find that Smith’s invisible hand provides no solution.
To be continued....

Wednesday, June 13, 2007

Mathematical Politics: Part 8

Complex Adaptive Systems
The Santa Fe Institute is an affiliation of largely male academics seeking to spread the theoretical net as widely as possible – especially into the domain of human society and complex systems in general. The theoretical holy grail is to write equations encompassing all that goes on under the sun and thus arrive at a preordained order which, once captured, means that theoreticians can retire declaring their work to be done. The universe is then a museum piece embodied in a few equations we can muse over, knowing that if we crank their handles they will churn out the answers. They will thereby encode all secrets and mysteries, thus making them no longer secrets and mysteries.


Thank God that’s not true. Even granted that physics contains catchall equations, those equations are very general and not specific. Moreover, our current physics, with its appeal to the absolute randomness of quantum fluctuation suggests that endless novelty is encoded into physical processes. As Sir John Polkinghorne points out in his book “Exploring Reality” the chaotic tumbling of the asteroid Hyperion is maintained by the underlying perturbations of quantum fluctuations. The motion of Hyperion is forever novel.
****
John Holland is and was a pioneer in the field of genetic algorithms. His work (amongst that of others) has revealed a close connection between learning systems and evolution. Both processes ring the changes and lock in successful dynamic structures when they find them. These structures select themselves because they have the adaptable qualities needed for self-maintenance in the face of the buffetings of a world in restless change. In evolution those structures are phenotypes adapted to their ecological niche; in learning systems they are algorithms encoding successful models of the world thereby allowing their host organism to anticipate aspects of that world.
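To make that “ring the changes and lock in what works” loop concrete, here is a minimal sketch of my own (a toy selection-and-mutation routine, not Holland’s actual genetic algorithm): bit strings are mutated at random and the variants that best match a fixed target pattern are retained.

```python
import random

def evolve(target: str, pop_size: int = 60, mutation_rate: float = 0.02,
           seed: int = 0) -> int:
    """Toy selection-and-mutation loop: keep the bit strings that best match a
    fixed target and breed mutated copies of them. Returns generations taken."""
    rng = random.Random(seed)
    length = len(target)
    population = ["".join(rng.choice("01") for _ in range(length))
                  for _ in range(pop_size)]
    generation = 0
    while target not in population:
        # Lock in the most successful structures...
        parents = sorted(population,
                         key=lambda s: sum(a == b for a, b in zip(s, target)),
                         reverse=True)[: pop_size // 4]
        # ...and ring the changes on mutated copies of them.
        population = ["".join(bit if rng.random() > mutation_rate else rng.choice("01")
                              for bit in rng.choice(parents))
                      for _ in range(pop_size)]
        generation += 1
    return generation

if __name__ == "__main__":
    print("generations to hit the target:", evolve("1100101011110000"))
```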

In his book “Complexity”, Mitchell Waldrop tells of John Holland’s lecture to the Santa Fe Institute. He describes how the theoreticians listening to Holland’s lecture were gobsmacked as Holland delivered a home truth – there aren’t any final equations (apart, perhaps, from some very general physical constraints) because reality is exploring the huge space of possibility and is therefore delivering endless novelty. That endless novelty can’t be captured in a specific way in some catchall theory. In fact there is really only one thing that can cope with it – learning systems like human beings – or as Holland calls them, “complex adaptive systems”. These are systems that are themselves so complex that they have the potential to generate an endless novelty, a novelty that matches or perhaps even exceeds that of their surroundings. Thus, these systems are either capable of anticipating environmental novelty or else at least able to learn from it when it crops up. There are no systems of equations capturing everything there is to know about complex adaptive systems or the environments they are matched to cope with. Therefore it follows that apart from God Himself there is only one system with a chance of understanding something as complex as human beings with all their chaotic foibles, and that is another human being.

To be continued.....

Thursday, May 24, 2007

Mathematical Politics: Part 7

Expecting the Unexpected.
The “super complexity” of human beings means that they are capable of throwing up unexpected “anomalies”, and by “anomalies” I don’t mean phenomena that are somehow absolutely strange, but only something not covered by our theoretical constructions. Just when you think you have trapped human behavior in an equation, out pops something not accounted for. These anomalies strike unexpectedly and expose the limits of one’s analytical imagination. They can be treated neither statistically, because there are too few of them, nor analytically, because the underlying matrix from which they are sourced defies simple analytical treatment.

Take the example I have already given of the supermarket checkout system. This system can, for most of the time, be treated successfully using a combination of statistics, queuing theory and the assumption that shoppers are “rational and selfish” enough to look after the load-balancing problem. But there is rationality and rationality. For example, if there is a very popular till operator who spreads useful local gossip or who is simply pleasant company, one might find that this operator’s queue starts to lengthen unexpectedly. The simplistic notions of self-serving behaviour and ‘rationality’ break down. Clearly in such a situation there is a much more subtle rationality being served. What makes it so difficult to account for is that it taps into a social context that goes far beyond what is going on in the supermarket queue. To prevent these wild cards impairing the function of the checkout system (such as disproportionately long queues causing blockages) the intervention of some kind of managerial control may, from time to time, be needed.

In short, laissez faire works for some of the people for some of the time, but not for all the people all of the time.

To be continued.....

Sunday, May 13, 2007

Mathematical Politics: Part 6

Mathematical Intractability
Randomness is a complexity upper limit – size for size nothing can be more complex than a random distribution generated by, say, the tosses of a coin. A sufficiently large random distribution configurationally embeds everything there possibly could be. And yet in spite of this complexity, it is a paradox that at the statistical level randomness is very predictable: for example, the frequency of sixes thrown by a die during a thousand throws can be predicted with high probability. In this sense randomness is as predictable as those relatively simple, highly organized physical systems like a pendulum or the orbit of a comet. But in between these two extremes of simplicity and complexity there is a vast domain of patterning that is termed, perhaps rather inappropriately, “chaotic”. Chaotic patterns are both organised and complex. It is this realm that is not easy to mathematicise.
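A quick worked example of that statistical predictability (my own illustration, not taken from anything discussed in this series): in 1000 throws of a fair die the number of sixes is binomially distributed, so its expected value and typical spread are easy to pin down even though the throw-by-throw sequence is maximally disordered.

```python
import math
import random

def sixes_summary(n_throws: int = 1000, p: float = 1 / 6) -> tuple[float, float]:
    """Expected number of sixes and its standard deviation for n fair throws."""
    mean = n_throws * p
    std = math.sqrt(n_throws * p * (1 - p))
    return mean, std

if __name__ == "__main__":
    mean, std = sixes_summary()
    print(f"expect about {mean:.0f} sixes, give or take roughly {std:.0f}")
    # A simulated run stays close to the prediction despite the randomness.
    rng = random.Random(42)
    observed = sum(rng.randint(1, 6) == 6 for _ in range(1000))
    print("one simulated run:", observed)
```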

We know of general mathematical schemes that generate chaos (like for example the method of generating the Mandelbrot set), but given any particular chaotic pattern finding a simple generating system is far from easy. Chaotic configurations are too complex for us to easily read out directly from them any simple mathematical scheme that might underlie them. But at the same time chaotic configurations are not complex enough to exhaustively yield to statistical description.

The very simplicity of mathematical objects ensures that they are in relatively short supply. Human mathematics is necessarily a construction kit of relatively few symbolic parts, relations and operations, and therefore relative to the vast domain of possibility, there can’t be many ways of building mathematical constructions. Ergo, this limited world of simple mathematics has no chance of covering the whole domain of possibility. The only way mathematics can deal with the world of general chaos is to either simply store data about it in compressed format or to use algorithmic schemes with very long computation times. Thus it seems that out there, there is a vast domain of pattern and object that cannot be directly or easily treated using statistics or simple analytical mathematics.

And here is the rub. For not only do human beings naturally inhabit this mathematically intractable world, but their behavior is capable of spanning the whole spectrum of complexity – from relatively simple periodic behaviour like workaday routines, to random behaviour that allows operational theorists to make statistical predictions about traffic flow, through all the possibilities in between. This is Super Complexity. When you think you have mathematicised human behaviour it will come up with some anomaly....

To be continued.....

Thursday, May 03, 2007

Mathematical Politics: Part 5

The Big But
Complex system theory, when applied to human beings, can be very successful. It is an interesting fact that many measurable human phenomena, like the size of companies, wealth, Internet links, fame, the size of social networks, the scale of wars, etc., are distributed according to relatively simple mathematical laws – laws that are qualitatively expressed in quips like “the big get bigger” and “the rich get richer”. It is an interesting fact that the law governing the distribution of, say, the size of social networks has a similar form to the distribution of the size of craters on the moon. It is difficult to credit this given that the objects creating social networks (namely human beings) are far more complex than the simple elements and compounds that have coagulated to produce the meteors that have struck the moon. On the other hand, there is an upper limit to complexity: complexity cannot get any more complex than randomness, and so once a process like meteor formation is complex enough to generate randomness, human behavior in all its sophistication cannot then exceed this mathematical upper limit.
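As an aside, the “rich get richer” mechanism behind such skewed distributions is easy to caricature in code (a minimal preferential-attachment sketch of my own, not a model taken from the programme or books discussed here): each new link attaches to an existing node with probability proportional to the links it already has, and a handful of nodes end up vastly richer than the rest.

```python
import random
from collections import Counter

def preferential_attachment(n_links: int, seed: int = 0) -> Counter:
    """Toy 'rich get richer' process: each new link picks its far endpoint with
    probability proportional to how many links that endpoint already has."""
    rng = random.Random(seed)
    endpoints = [0, 1]  # seed the process with a single link between two nodes
    for new_node in range(2, n_links + 2):
        target = rng.choice(endpoints)  # degree-proportional choice
        endpoints.extend([new_node, target])
    return Counter(endpoints)

if __name__ == "__main__":
    degrees = preferential_attachment(10_000)
    print("five 'richest' nodes and their link counts:", degrees.most_common(5))
```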


The first episode of “The Trap”, screened on BBC2 on 11th March 2007, described the application of games theory to the cold war (a special case of complex system theory). The program took a generally sceptical view (rightly, in my opinion) of the rather simplistic notions of human nature employed as the ground assumptions needed in order that games theory and the like be applicable to humanity. To support this contention the broadcast interviewed John Nash (he of “A Beautiful Mind” and “Nash Equilibrium” fame – pictured), who admitted that his contributions to games theory were developed in the heat of a paranoid view of human beings (perhaps influenced by his paranoid schizophrenia). He also affirmed that in his view human beings are more complex than the self-serving, conniving agents assumed by these theories.

Like all applications of mathematical theory to real world situations there are assumptions that have to be made to connect that world to the mathematical models. Alas, human behaviour does, from time to time, transcend these models and so in one sense it seems that human beings are more complex than complex. But how can this be?

To be continued....

Wednesday, April 18, 2007

Mathematical Politics: Part 4

The Robustness of Complex System Theory
Most people, when checking out of a supermarket, will select the queue they perceive to require the least amount of waiting time. The result is that the queues in a supermarket all stay roughly the same length. People naturally distribute themselves equitably over the available queues, perhaps even taking into account the size of the shopping loads of those people queuing. Thus, the load balancing of supermarket queues doesn’t need a manager directing people to the queues: the decisions can be left to the shoppers. Because this decentralised method of load balancing uses the minds of many shoppers, each of whom is likely to be highly motivated to get out of the shop quickly, it is probably superior to the single and perhaps less motivated mind of someone specially employed as a queue manager. Supermarket queuing is just one example of order – in this case an ordered load-balancing system – emerging out of the behaviour of populations of autonomous but interacting components.
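The load-balancing effect is simple to caricature in a few lines (a toy sketch of my own, not a serious queueing model): shoppers who join the shortest queue keep the queue lengths far more even than shoppers assigned at random.

```python
import random
import statistics

def simulate(n_shoppers: int = 500, n_queues: int = 8,
             join_shortest: bool = True, seed: int = 0) -> float:
    """Toy supermarket model: shoppers arrive one at a time and either join the
    shortest queue (decentralised load balancing) or a random queue.
    Returns the spread (standard deviation) of the final queue lengths."""
    rng = random.Random(seed)
    queues = [0] * n_queues
    for _ in range(n_shoppers):
        if join_shortest:
            choice = min(range(n_queues), key=lambda q: queues[q])
        else:
            choice = rng.randrange(n_queues)
        queues[choice] += 1
    return statistics.pstdev(queues)

if __name__ == "__main__":
    print("spread when shoppers pick the shortest queue:", simulate(join_shortest=True))
    print("spread when shoppers are assigned at random: ", simulate(join_shortest=False))
```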

It is this kind of scenario that typifies the application of complex systems theory. When it is applied to human societies the assumption is that people are good at looking after themselves, both in their motivation and in their knowledge of the situation on the ground. The stress is on the responsibility of the individual agents to make the right local decisions in their own interest. In looking after their own affairs they inadvertently serve the whole. In short, the economy looks after itself. This is the kernel of Adam Smith’s argument in “Wealth of Nations”.

So, the argument goes, for the successful creation and distribution of wealth the centralised planning of a command economy is likely to be a less efficient decision-making process than that afforded by the immense decisional power latent in populations of people who are competent at identifying and acting on their own needs and desires. In particular, technological innovation is very much bound up with the entrepreneurial spirit that amalgamates the skills of marketers and innovators who spot profit opportunities that can be exploited by new technology. Hence free-market capitalism goes hand in hand with progress. Such activity seems well beyond the power of some unimaginative central planner. It has to be admitted that there is robustness in this argument: centralised planners don’t have the motivation, the knowledge or the processing power of the immense distributed intelligence found in populations of freely choosing agents.

But there is always a but.....

To be continued....

Wednesday, April 11, 2007

Mathematical Politics: Part 3

The Rise of Postmodernism: In a scenario that could itself serve as a complex systems case study, the political perturbations of the eighties were beset with a chaotic cascade of ironies. As Thatcher and Reagan made it their business to dismantle the power of central government in favour of a decentralized market of economic decision makers, their anti-interventionist policies were readily portrayed as the path to true freedom. In contrast the traditional left-wingers, as advocates of an economy planned by a strong central government, opened themselves up to being accused of aiming to meddle in people’s affairs and thereby curtailing their freedoms. Moreover, the left, which so often identified itself as the friend of the benign self-regulating systems of the natural ecology, never let on that the natural world has an isomorphism with the self-regulating mechanisms of the free market. The left might rail against big business as it polluted and disturbed a natural world that functioned best without intervention, and yet the left had no qualms about disturbing the free market with their planned economy. But the radical right also presented us with a paradox. If they were to push through their free-market vision they had to use strong central government to do so. Thus, like all radical governments since the French revolution who believed their subjects could not be left free to choose their freedom, the radical right faced the logical conundrum encapsulated in the phrase “the tyranny of freedom”. Thus, as is the wont of those who think they should be in power but aren’t, it was easy for the left to cast the radical right as the true despots.

So who, then, was for freedom and who wasn’t? The left or the right? Both parties had marshalled some of the best intellects the world has seen, and yet they seem to have led us into an intellectual morass. Belief in man’s ability to make sense of his situation was at a low ebb, and against a backdrop of malaise and disaffection it is not surprising that there should arise a widespread distrust of anyone who claimed to really know universal truths, whether from the left or the right. The Postmodernists believed they had the answer to who was for freedom and who wasn’t – or perhaps I should say they didn’t have the answer, because Postmodernism is sceptical of the claims of all ‘grand narratives’, like Marxism or complex systems theory, to provide overarching explanations and prescriptions for the human predicament. Postmodernism consciously rejected the ‘grand narratives’ of left and right as not only intellectual hubris but hegemonic traps, tempting those believing in these narratives to foist them upon others, by coercion if needs be. The grand narratives that both parties held, and which they promulgated with evangelical zeal, led them to infringe the rights of the individual and engage in a kind of conceptual imperialism. Those of an anti-establishment sentiment, who in times past found natural expression and hope of liberation in Marxism, no longer feel they can identify with any grand philosophy and instead have found their home in the little narratives of postmodernism, where contradiction, incoherence and fragmentation in one’s logic are not merely accepted but applauded as just rebellion against the intellectual tyranny of the know-all grand theorists.

But irony is piled on irony. Postmodernism, as the last bastion of the anti-establishment, is in one sense the ultimate decentralisation, the ultimate laissez faire, the ultimate breakdown into individualism. One is not only free to do what one fancies but also free to believe what one fancies. The shared values, vision and goals of civic life are replaced with conceptual anarchy. With the failure of Marx’s grand narrative to make sense of social reality, those of an anti-establishment sentiment no longer have a philosophy to pin their hopes on; instead they have unintentionally thrown in their lot with the radical right: they are carrying out the ultimate live experiment with a system of distributed living decision makers. According to complex systems theory either some kind of organised equilibrium or chaotic fluctuation will ensue, whether the postmodernists believe it or not. You can’t escape the grand narratives of mathematics, although you might like to think you can.

To be continued....

Thursday, April 05, 2007

Mathematical Politics: Part 2

Marxism on the Run: At about the time free market economics was in the ascendancy under Margaret Thatcher and Ronald Reagan I was involved in a study of Marxism, in the course of which I even attended some Marxist meetings. For me the two political philosophies were thrown into sharp contrast, and the radical right, with their allusions to mathematical systems theory, were beginning to show up Marxism for what it was: a Victorian theory of society that was now looking rather antiquated, or at the very least in need of a conceptual makeover. If Marxism failed to enhance itself with modern insights taken from systems theory it would become obsolete. And obsolete it was fast becoming: the Marxists I met were entrenched in nineteenth-century ideas and they weren’t going to update them. For example, the radical right’s excursion into systems theory was debunked by these Marxists as just another piece of intellectual sophistry devised by the intellectuals of the propertied classes with the aim of befuddling us workers and obscuring the reality of class conflict. It was clear that this old-style Marxism was not going to make any serious attempt to engage with these new ideas.

Another serious failing of Marxism, and another sign of its nineteenth-century origins, is that its theory of human nature has not advanced much beyond Rousseau’s naive concept of the noble savage. In fact one Marxist I spoke to on this subject suggested that the nature of human nature is irrelevant, and he simply reiterated that well-known Marxist cliché about “economic realities being primary”. He was still working with the seventeenth-century Lockean view of human nature; to him, humans were the ‘blank slate’ that Steven Pinker has so eloquently argued against. All that mattered was getting the economic environment right, and to hell with all these ideas about the neural substrate on which human nature is founded and its origins in the recipes of genetics.

And ‘hell’ is not such an inappropriate term, even for an atheistic philosophy like Marxism. I am not the most enthusiastic advocate of laissez faire capitalism, but it seemed to me that Marxist theory was going to seed, one sign of this being that the Marxists I met dismissed any robust challenge by assuming from the outset that it was a cynical attack by the middle classes. Basically their response was little different from the “Satan argument” used by some Christians to protect their faith. The Satan argument posits a win-win situation from the outset, and it works like this: if a challenge is made to the faith that cannot be easily countered, then that challenge must come from Satan (read: the middle class) and therefore should be ignored. It is impossible to overcome this kind of conceptual defence, because the more successful the challenge the more strongly it will be identified as “Satanic” (or bourgeois).

When Soviet Russia collapsed at the beginning of the nineties it seemed that Marxism was a spent force. Cult Marxism still lingered, of course, but a new generation of anti-establishment idealists who needed a philosophy they could call their own were left as intellectual orphans. Where would they find a home?

To be continued...

Saturday, March 31, 2007

Mathematical Politics: Part 1

Complex Systems Theory: I recently watched the three episodes of “The Trap”, a documentary screened on BBC2 on Sunday nights. The program studied the development of western political policy from the 1950s to date. As a rule I am not greatly interested in politics, and I probably glanced at the entry in the Radio Times for the first episode and dismissed it. However, just by chance I happened to turn on the TV at the start of the first program and I immediately found myself lapping it up. The first program wasn’t an exposé of the gossipy particulars and intrigues of political life, but told of the application of games theory to the cold-war stalemate. This really interested me because here was a program dealing with fundamental principles and not particulars. I have a nodding acquaintance with games theory, but as I have never really closely attended to politics I didn’t know, as the program alleged, that games theory had been so seriously applied during the cold war decades in order to deliberately create a stalemate that circumvented nuclear war. Although this passed me by at the time, I was aware of another trend in politics alluded to in this first episode of ‘The Trap’: the radical right’s application of complex systems theory to socio-economics during the nineteen eighties.


Complex systems theory is not a single theory as such but an interdisciplinary, largely mathematical subject combining theoretical insights taken from a variety of disciplines, from physics, through information theory, to computational theory. It is of great generality, having many applications, and encompasses games theory as a special case. It deals with systems of relatively simple interacting parts, where each part obeys some basic rules determining just how it interacts with the other parts of the system. What piques the interest of complex systems theorists is that so often these systems of interacting parts show “self-organizing” behavior; that is, the parts organize themselves into highly ordered forms. Take, for example, the spectacle of synchronized flying displayed by a large flock of birds. This behavior can be simulated with computer models using the fundamental insight of systems theory – namely, that the complex organized behavior of a flock emerges as a result of some basic rules determining how each individual responds to its neighbors. The crucial observation is that to produce this self-organizing effect no central control is needed – just simple rules telling each part how to look after itself. In the cold war period the “players” in the nuclear deterrence game looked after themselves by responding to the threat that each posed to the other, and the result, it was inferred, would be a stalemate – or, in the mathematical speak of games theory, a “Nash equilibrium”. All-out nuclear war was thereby avoided. Most people have an intuitive grasp of how mutual deterrence is supposed to prevent either side making an aggressive move, but games theory supports this intuitive notion with mathematical rigor. The self-organized outcome of the cold war game was, the theory suggests, a peaceful, if rather tense, coexistence. That was the theory anyway.
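To show what “basic rules determining how each individual responds to its neighbors” can look like in practice, here is a bare-bones flocking sketch in Python, loosely in the spirit of Craig Reynolds’ well-known “boids” simulations. The three rules (move toward nearby birds, match their heading, avoid crowding) come from that tradition; every constant, and the crude alignment measure printed at the end, is my own arbitrary choice for illustration.

```python
import math
import random

# Each bird steers using only three local rules applied to neighbours within
# sight: move toward them (cohesion), match their heading (alignment) and
# avoid crowding (separation).  All constants are arbitrary illustrations.

class Bird:
    def __init__(self, rng):
        self.x, self.y = rng.uniform(0, 100), rng.uniform(0, 100)
        angle = rng.uniform(0, 2 * math.pi)
        self.vx, self.vy = math.cos(angle), math.sin(angle)

def step(flock, sight=15.0, too_close=3.0):
    for b in flock:
        neighbours = [o for o in flock
                      if o is not b and math.hypot(o.x - b.x, o.y - b.y) < sight]
        if not neighbours:
            continue
        n = len(neighbours)
        cx = sum(o.x for o in neighbours) / n          # centre of nearby birds
        cy = sum(o.y for o in neighbours) / n
        avx = sum(o.vx for o in neighbours) / n        # average nearby heading
        avy = sum(o.vy for o in neighbours) / n
        sep_x = sum(b.x - o.x for o in neighbours
                    if math.hypot(o.x - b.x, o.y - b.y) < too_close)
        sep_y = sum(b.y - o.y for o in neighbours
                    if math.hypot(o.x - b.x, o.y - b.y) < too_close)
        # cohesion + alignment + separation, each with a small weight
        b.vx += 0.01 * (cx - b.x) + 0.05 * (avx - b.vx) + 0.1 * sep_x
        b.vy += 0.01 * (cy - b.y) + 0.05 * (avy - b.vy) + 0.1 * sep_y
        speed = math.hypot(b.vx, b.vy) or 1.0          # keep a constant speed
        b.vx, b.vy = b.vx / speed, b.vy / speed
    for b in flock:
        b.x += b.vx
        b.y += b.vy

rng = random.Random(0)
flock = [Bird(rng) for _ in range(50)]
for _ in range(200):
    step(flock)

# Crude order parameter: the length of the average velocity vector is near 0
# for random headings and approaches 1 as the headings line up.
mean_vx = sum(b.vx for b in flock) / len(flock)
mean_vy = sum(b.vy for b in flock) / len(flock)
print("alignment (0 = disordered, 1 = fully aligned):",
      round(math.hypot(mean_vx, mean_vy), 2))
```

No bird knows where the flock as a whole is heading and nothing coordinates them from above; whatever degree of alignment the final printout reports has emerged purely from the local rules – which is all that “self-organization” means here.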

Complex systems theory in its general form made its presence felt in politics during the eighties with the swing toward free market economics under Margaret Thatcher and Ronald Reagan. In 1987 I watched a documentary in which the ‘radical right’, as they were called, stated their case. It was clear to me even then that the radical right really had got their intellectual act together. They bandied about terms such as ‘distributed processors’, ‘local information’, ‘self-regulation’, ‘self-organizing systems’ and the like – all things that are very familiar to a complex systems theorist. According to the radical right, central government should refrain from interfering with the natural processes of the free market, processes that solve the problems of wealth creation and distribution in ways analogous to decentralized natural systems like the ants’ nest, the brain and the hypothetical Gaia. In these natural systems there is no central control; the ‘intelligence’ of the system is distributed over many relatively simple parts, and these parts behave and interact with one another using some basic rules. Likewise, society, it is conjectured, can be modeled after the fashion of these decentralized systems. Government intervention, according to the radical right, is likely to disrupt the natural self-regulating mechanisms of the free market. In fact no central planner could ever have enough information, or even the cognitive wherewithal, to do what the market’s many decentralized processors do. The notions behind Adam Smith’s ‘Wealth of Nations’, so hated by Marx, were now seen as a special case of complex systems theory. Smith’s vision of a system of autonomous wealth producers making local decisions based on their surroundings, and thereby generating an economic order, echoed the self-organizing properties of many natural systems. Intelligence, rationality and order “emerge” out of these distributed systems, and that, it is contended, also holds for the free market.

That’s the theory anyway.

To be continued...