Monday, December 31, 2007

Fireside Problem

Over the Christmas period I have been grappling with a problem that has been hanging around with me for some time.
Now, it is possible to generate highly disordered sequences of 1s and 0s using an algorithm. An algorithm is also expressible as a sequence of 1s and 0s. However, whereas a disordered sequence may be long – say of length Lr – the algorithm sequence generating it, of length Lp, may be short. That is, Lp is less than Lr. The number of possible algorithm sequences will be 2^Lp. The number of possible disordered sequences will be nearly equal to 2^Lr. So, because 2^Lp is much less than 2^Lr, it is clear that short algorithms can generate only a very limited subset of all the possible disordered sequences. For me, this result has a counterintuitive consequence: it suggests that there is a small class of disordered sequences that have a special status – namely the class of disordered sequences that can be generated algorithmically. But why should a particular class of disordered sequences be so mathematically favoured when in one sense every disordered sequence is like every other disordered sequence in that they all have the same statistical profile? My intuitions suggest that all disordered sequences are in some sense mathematically equal, and yet it seems that algorithms confer a special status on a small class of these sequences.
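The counting argument can be made concrete in a few lines of Python. This is only a toy illustration, in which 'programs' are treated as bare bit-strings of length Lp:

```python
# Counting argument: programs of length Lp can name at most 2**Lp distinct
# outputs, but there are 2**Lr result strings of length Lr.  For Lp < Lr
# the reachable fraction is at most 2**(Lp - Lr).
def reachable_fraction(lp: int, lr: int) -> float:
    """Upper bound on the fraction of length-lr strings that any
    deterministic scheme with length-lp programs can generate."""
    return 2**lp / 2**lr

# With 10-bit programs and 20-bit results, at most ~0.1% of the
# possible results are algorithmically reachable.
print(reachable_fraction(10, 20))  # 0.0009765625
```

The point is purely combinatorial: however the short programs are interpreted, they cannot label more outputs than there are programs.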
I think I now know where the answer to this intuitive contradiction lies. To answer it we have to go back to the network view of algorithmic change. If we take a computer like a Turing machine, it seems that it wires the class of all binary sequences into a network in a particular way, and it is the bias introduced by the network wiring that leads to certain disordered configurations being apparently favoured. It is possible to wire the network together in other ways that would favour other classes of disordered sequences. In short, the determining variable isn’t the algorithm but the network wiring, which is a function of the computing model being used. It is the computing model inherent in the network wiring that has as many degrees of freedom as a sequence of length Lr. Thus, in as much as any disordered sequence can occupy an algorithmically favoured position depending on the network wiring used by the computing model, in that sense no disordered sequence is absolutely favoured over any other.

Well, that’ll have to do for now; I suppose I had better get back to the Intelligent Design Contention.

Wednesday, December 12, 2007

The Intelligent Design Contention: Part 1

The Contention
For some months now I have been reading Sandwalk, the blog of atheist Larry Moran or, as he dubs himself, ‘A Skeptical Biochemist’. (He can’t be that skeptical, otherwise he would apply his skepticism to atheism and graduate from ‘skeptic’ to ‘doubter’.) One reason for going to his blog was to help me get a handle on the evolution versus Intelligent Design (ID) debate. The ID case hinges on a central issue that I describe in this post.

The ID contention is, in the abstract, this: if we take any complex organized system consisting of a set of mutually harmonized parts that by virtue of that harmony form a stable system, it seems (and I stress seems) that any small change in the system severely compromises the stability of that system. If these small changes lead to a breakdown in stability, how could the system have evolved, given that evolution is a process requiring that such systems be arrived at by a series of incremental changes?

Complex organized systems of mutually harmonized components are termed ‘Irreducibly Complex’ by ID theorists. The term ‘irreducible’ in this context refers, I assume, to the fact that apparently any attempt to make the system incrementally simpler by, say, removing or changing a component results in severe malfunction, and in turn this jeopardizes the stability of the system. If the apparent import of this is followed through, it follows that there are no ‘stable exit routes’ by which the system can be simplified without compromising stability. If there are no ‘stable exit routes’ then there are no ‘stable entry routes’ by which an evolutionary process can ‘enter’.

Mathematically expressed:

Stable incremental routes out = stable incremental routes in = ZERO.

In the ID view many biological structures stand in unreachable isolation, surrounded by a barrier of evolutionary 'non-computability'. Believing they have got to this point, ID theorists are then inclined to make a threefold leap in their reasoning: 1) They reason that at least some aspects of complex stable systems of mutually harmonized parts have to be contrived all of a piece; that is, in one fell swoop as a fait accompli. 2) That the only agency capable of creating designs in such a manner is one with intelligence as a ‘given’. 3) That this intelligence is to be identified with God.

I get a bad feeling about all this. Once again I suspect that evangelicalism is urinating up the wrong lamp post. Although the spiritual attitudes of the ID theorists look a lot better than those of some of the redneck Young Earth Creationists, I still feel very uneasy. So much of what one is supposed to accept under the umbrella of evangelicalism is administered with subtle, and sometimes not so subtle, hints that one is engaged in spiritual compromise if one doesn’t accept what is being offered. I hope that at least ID theory will not become bound up with those who apply spiritual duress to doubters.

Wednesday, November 14, 2007

Generating Randomness

I have been struggling to relate algorithmics to the notion of disorder. In my last post on this subject I suggested that an algorithm that cuts a highly ordered path through a network of incremental configuration changes in order to eventually generate randomness would require a ‘very large’ number of steps in order to get through the network. This result still stands, and it probably applies to a simple algorithm like the counting algorithm, which can be modeled using simple recursive searching of a binary tree network.

But, and here’s the big but, a simple counting algorithm does not respond to the information in the current results string (the results string is the equivalent of the tape in a Turing machine). When an algorithm starts receiving feedback from its ‘tape’, then things change considerably – we then move into the realm of non-linearity.

Given the ‘feedback’ effect, and using some assumptions about just what a Turing machine could at best achieve, I have obtained a crude formula giving an upper limit on the speed with which a Turing machine could develop a random sequence. A random sequence would require at least ‘t’ steps to develop, where:

t ~ kDn/log(m)
and where ‘k’ is a constant whose factors I am still pondering, ‘D’ is the length of the sequence, ‘n’ is an arbitrarily chosen segment length whose value depends on the quality of randomness required, and ‘m’ is the number of Turing machine states. This formula is very rough, but it makes use of a frequency profile notion of randomness I have developed elsewhere. The thing to note is that the non-linearity of algorithmics (a consequence of using conditionals) means that the high complexity of randomness could conceivably be developed in linear time. This result was a bit of a surprise, as I have never been able to disprove the notion that the high complexity of randomness requires exponential time to develop – a notion not supported by the speed with which ‘random’ sequences can be generated in computers. Paradoxically, the high complexity of randomness (measured in terms of its frequency profile) may not be computationally complex. On the other hand, it may be that what truly makes randomness computationally complex is that for high-quality randomness D and n are required to become ‘exponentially’ large in order to obtain a random sequence with a ‘good’ frequency profile.
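To show what the bound claims, here is a sketch that simply evaluates t for assumed values of D, n and m; the constant k is a placeholder, since its factors are still open:

```python
import math

def min_steps(D: int, n: int, m: int, k: float = 1.0) -> float:
    """Rough lower bound t ~ k*D*n/log(m) on the number of Turing machine
    steps needed to develop a random sequence of length D, where n is the
    chosen segment length and m is the number of machine states.
    k = 1.0 is a placeholder assumption."""
    return k * D * n / math.log(m)

# The bound is linear in D: doubling the sequence length doubles t.
print(min_steps(1000, 8, 16))
print(min_steps(2000, 8, 16))
```

The linearity in D is the whole point of the surprise noted above: nothing in the formula forces exponential growth unless D and n themselves are forced to grow.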

Thursday, September 27, 2007

The Virtual World

Why don’t I accept the multiverse theory, I have to ask myself. One reason is that I interpret quantum envelopes to be what they appear to be: namely, objects very closely analogous to the Gaussian envelopes of random walk. Gaussian random walk envelopes are not naturally interpreted as the product of an ensemble of bifurcating and multiplying particles (although this is, admittedly, a possible interpretation) but rather as a measure of information about a single particle. ‘Collapses’ of the Gaussian envelope are brought about by changes in information on the whereabouts of the particle. I see parallels between this notion of ‘collapse’ and quantum wave collapse. However, I don’t accept the Copenhagen view that sudden jumps in the “state vector” are conditioned by the presence of a conscious observer. My guess is that the presence of matter, whether in the form of a human observer or other material configurations (such as a measuring device), is capable of bringing about these discontinuous jumps. (Note to self: The probability of a switch from state |w⟩ to state |v⟩ is given by |⟨w|v⟩|², and this expression looks suspiciously like an ‘intersection’ probability.)
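The random walk analogy can be put in code. The envelope of an unbiased walk has width of order sqrt(N), and locating the walker replaces that envelope discontinuously; the 'collapse' step below is my illustrative assumption, not a quantum calculation:

```python
import math
import random

def envelope_width(steps: int) -> float:
    """Standard deviation of an unbiased +/-1 random walk after `steps` steps."""
    return math.sqrt(steps)

random.seed(1)
# One walker takes 10,000 unit steps.
position = sum(random.choice((-1, 1)) for _ in range(10_000))

# Before observation, our information about the walker is an envelope of
# width ~100 centred on 0.  On locating the walker the envelope 'collapses'
# to a point - a change of information, not a physical signal.
width_before = envelope_width(10_000)
width_after = 0.0
print(width_before, position, width_after)
```

The discontinuity lives entirely in the information we hold about the walker, which is the parallel being drawn with wave function collapse.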

The foregoing also explains why I’m loath to accept the decoherence theory of measurement: this is another theory which dispenses with literal collapses because it suggests that they are only an apparent phenomenon in as much as they are an artifact of the complex wave interactions of macroscopic matter. Once again this seems to me to ignore the big hint provided by the parallels with random walk. The latter lead me to view wave function ‘collapse’ as something closely analogous to the changes of information which take place when one locates an object; the envelopes of random walk can change discontinuously in a way that is not subject to the physical strictures on the speed of light and likewise for quantum envelopes. My guess is that the ontology of the universe is not one of literal particles, but is rather an informational facade about virtual particles; those virtual particles can’t exceed the speed of light, but changes in the informational envelopes in which the virtual particles are embedded are not subject to limitations on the speed of light.

Monday, September 24, 2007

Well Versed

I have been delving into David Deutsch’s work on parallel universes. Go here to get some insight into Deutsch’s recent papers on the subject. I’m not familiar enough with the formalism of quantum computation to be able to follow Deutsch’s papers without a lot more study. However, some salient points arise. In this paper, dated April 2001 and entitled “The Structure of the Multiverse”, Deutsch says:
“.. the Hilbert space structure of quantum states provides an infinity of ways of slicing up the multiverse into ‘universes’ each way corresponding to a choice of basis. This is reminiscent of the infinity of ways in which one can slice (‘foliate’) a spacetime into spacelike hypersurfaces in the general theory of relativity. Given such a foliation, the theory partitions physical quantities into those ‘within’ each of the hypersurfaces and those that relate hypersurfaces to each other. In this paper I shall sketch a somewhat analogous theory for a model of the multiverse”
That is, as far as I understand it, Deutsch is following the procedure I mentioned in my last blog entry – he envisages the relationships of the multiple universes similar to the way in which we envisage the relationships of past, present and future.

On the subject of non-locality Deutsch, in this paper on quantum entanglement, states:
“All information in quantum systems is, notwithstanding Bell’s theorem, localized. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through ‘classical’ (i.e. decoherent) channels.”
Deutsch is attacking the non-local interpretation of certain quantum experiments. In this paper David Wallace defends Deutsch and indicates the controversy surrounding Deutsch’s position and its dependence on the multiverse contention. In the abstract we read:

“It is argued that Deutsch’s proof must be understood in the explicit context of the Everett interpretation, and that in this context it essentially succeeds. Some comments are made about the criticism of Deutsch’s proof by Barnum, Caves, Finkelstein, Fuchs and Schack; it is argued that the flaw they point out in the proof does not apply if the Everett interpretation is assumed.”

And Wallace goes on to say:

“…it is rather surprising how little attention his (Deutsch’s) work has received in the foundational community, though one reason may be that it is very unclear from his paper that the Everett interpretation is assumed from the start. If it is tacitly assumed that his work refers instead to some orthodox collapse theory, then it is easy to see that the proof is suspect… Their attack on Deutsch’s paper seems to have been influential in the community; however, it is at best questionable whether or not it is valid when Everettian assumptions are made explicit.”

The Everett interpretation equates to the multiverse view of quantum mechanics. Deutsch’s interpretation of QM is contentious. It seems that theorists are between a rock and a hard place: on the one hand is non-locality and absolute randomness and on the other is an extravagant ontology of a universe bifurcating everywhere and at all times. It is perhaps NOT surprising that Deutsch’s paper received little attention. Theoretical Physics is starting to give theorists that “stick in the gullet” feel and that’s even without mentioning String Theory!

Saturday, September 22, 2007

Quantum Physics: End of Story?

News has just reached me via that auspicious source of scientific information, Norwich’s Eastern Daily Press (20 September), of a mathematical breakthrough in quantum physics at Oxford University. It is described as “one of the most important developments in the history of science”; my assessment of the report is that multiverse theory has been used to derive and/or explain quantum physics.

There are two things that have bugged scientists about Quantum Physics since it was developed in the first half of the twentieth century. Firstly, its indeterminism – it seemed to introduce an absolute randomness into physics that upset the classical mentality of many physicists, including Einstein: “God doesn’t play dice with the universe”. The second problem, which in fact is related to this indeterminism, is that Quantum Theory suggests that when these apparently probabilistic events do occur, distant parts of the universe hosting the envelope of probability for these events must instantaneously cooperate by giving up their envelope. This apparent instantaneous communication between distant parts of the cosmos, demanding faster than light signaling, also worried Einstein and other physicists.

Multiverse theory holds out the promise of reestablishing a classical physical regime of local and deterministic physics, although at the cost of positing the rather exotic idea of universes parallel to our own. It achieves this reinstatement, I guess, by a device we are, in fact, all familiar with. If we select, isolate and examine a particular instant in time in our own world, we effectively cut it off from its past (and future). Cut adrift from the past, much about that instant fails to make sense and throws up two conundrums analogous to the quantum enigmas I mentioned above. Firstly, there will be random patterns, like the distribution of stars, which just seem to be there, when in fact an historical understanding of star movement under gravity gives some insight into that distribution. Secondly, widely separated items will seem inexplicably related – like, for example, two books that have identical content. By adding the time dimension to our arbitrary time slice, the otherwise inexplicable starts to make sense. My guess is that by adding the extra dimensions of the multiverse a similar explanatory contextualisation has finally – and presumably tentatively – been achieved with the latest multiverse theory.

Not surprisingly, the latest discovery looks as though it has come out of the David Deutsch stable. He has always been a great advocate of the multiverse. By eliminating absolute randomness and non-locality, multiverse theory has the potential to close the system and tie up all the loose ends. Needless to say, all this is likely to proceed against a background of ulterior motivations and may well be hotly contested, not least the contention that Deutsch has made the greatest discovery of all time!
1. The tying of all loose ends is only apparent; all finite human knowledge can only flower out of an irreducible kernel of fact.
2. Multiverse theory, unlike the Copenhagen interpretation of Quantum Mechanics, suggests that quantum envelopes do not collapse at all, but always remain available for interference. Hence it should in principle be possible to detect the difference between these two versions of Quantum Theory experimentally.

Thursday, August 30, 2007

Pilgrim's Project Progress on Path Strings

Here is the latest on my attempts to apply the notion of disorder to algorithms.

In 1987 I compiled a private paper on the subject of disorder. I defined randomness using its pattern frequency profile and related it to its overwhelming statistical weight, showing that the frequency profile of randomness has, not surprisingly, far more particular ways in which it can be contrived than any other kind of frequency profile. I never thought of this work as particularly groundbreaking, but it represents a personal endeavour to understand an important aspect of the world, and ever since, this notion of randomness has been an anchor point in my thinking. For example, for a long while I have worked on a project that reverses the modern treatment of randomness – rather than throw light on randomness using algorithmic information theory, I have attempted to investigate algorithmics using this frequency profile notion of randomness.
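The statistical-weight point is easy to verify with binomial coefficients: among binary strings of a given length, the balanced frequency profile can be contrived in far more particular ways than any other. A quick check:

```python
from math import comb

n = 20
# comb(n, k) counts the length-n binary strings containing exactly k ones,
# i.e. the statistical weight of the frequency profile "k ones in n bits".
weights = [comb(n, k) for k in range(n + 1)]

# The balanced profile k = n/2 dominates every other profile...
print(max(weights) == comb(n, n // 2))  # True
# ...and utterly dwarfs the fully ordered profiles k = 0 and k = n.
print(comb(n, n // 2), comb(n, 0))  # 184756 1
```

This is the sense in which the disordered profile carries overwhelming statistical weight: it is realized by vastly more particular configurations than any ordered profile.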

I have recently made some progress on this project, a project that constitutes project number 2 on my project list. I’m still very much at the hand waving stage, but this is how things look at the moment:

Imagine a very long binary string – long enough to express any kind of computer output or result required. Now imagine all the possible configurations that this string could take up, and arrange them into a massive network of nodes where each node is connected to neighboring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’) then a given binary configuration will be connected to Lr other nodes, where each of these other nodes differs by just one bit.
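This network is in fact the Lr-dimensional hypercube; a minimal sketch of the node and neighbour structure:

```python
def neighbours(config: str):
    """All nodes one incremental (single-bit) change away from `config`.
    A string of length Lr has exactly Lr neighbours."""
    flip = {'0': '1', '1': '0'}
    return [config[:i] + flip[config[i]] + config[i + 1:]
            for i in range(len(config))]

node = '0000'
print(neighbours(node))                    # ['1000', '0100', '0010', '0001']
print(len(neighbours(node)) == len(node))  # True
```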

Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, which has a length represented by Lp.

Now, if we started with a result string of Lr zeros we could convert it into a random sequence by traversing the network with a path string of minimum length of the order of Lr; that is, Lp ~ Lr. This is the shortest possible path length for creating a random sequence. However, to achieve the result this quickly would require the path string itself to be random. But what if for some reason we could only traverse very ordered paths? How long would an ordered path string have to be to produce a complex disordered result? This is where it gets interesting, because it looks as though the more ordered the path string is, the longer it has to be to generate randomness. In fact very ordered paths have to be very long indeed, perhaps exponentially long; i.e. Lp >>> Lr for ordered paths and disordered results. This seems to be my first useful conclusion. I am currently trying to find a firmer connection between the path strings and algorithms themselves.
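The shortest-path claim can be illustrated in code; the path encoding below (a list of bit positions to flip, one per step) is my own choice for the sketch:

```python
import random

def walk(result: list, path: list) -> list:
    """Apply a path (a sequence of bit positions to flip) to a result string."""
    out = result[:]
    for pos in path:
        out[pos] ^= 1  # one incremental change per step
    return out

Lr = 16
zeros = [0] * Lr
random.seed(42)
target = [random.randint(0, 1) for _ in range(Lr)]

# A disordered path reaches a disordered result in at most Lr steps:
# flip exactly the positions where target and start disagree.
path = [i for i in range(Lr) if target[i] != zeros[i]]
assert walk(zeros, path) == target
print(len(path) <= Lr)  # True
```

Note that the short path above is itself as disordered as the target; the open question in the text is how much longer the path must be when it is constrained to be ordered.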

Sunday, July 15, 2007

Mathematical Politics: Part 10

The Irrational Faith in Emergence
Marx did at least get one thing right; he understood the capacity of a laissez faire society to exploit, and for the poor to fall by the wayside, unthought of and neglected. In the laissez faire society individuals attempt to solve their immediate problems of wealth creation and distribution with little regard for the overall effect. Adam Smith’s conjecture was that as people serve themselves, optimal wealth creation and distribution will come out of the mathematical wash. But is this generally true? The laissez faire system is, after all, just that: a system, a system with no compassion and without heart. As each part of that system looks to its own affairs, no one is checking to see if some people are falling into poverty. And they do – in fact one expects it from complex systems theory itself. Like all systems, the absolutely laissez faire economy is subject to chaotic fluctuations – stock market crashes, and swings in inflation and employment. These fluctuations are exacerbated by the constant perturbations of myriad factors, especially the moving goal posts of technological innovation. Moreover, the “rich get richer” effect is another well-known effect one finds in complex systems – it is a consequence of some very general mathematics predicting an inequitable power law distribution of wealth.
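The "rich get richer" effect is easy to reproduce in a toy random-exchange model; the transfer rule here is purely illustrative and makes no claim about real economies:

```python
import random

def exchange_economy(agents: int, rounds: int, seed: int = 0) -> list:
    """Start everyone with equal wealth; each round a random pair meets and
    a random stake (at most the poorer agent's wealth) moves to one of the
    two.  No agent intends inequality, yet inequality emerges."""
    random.seed(seed)
    wealth = [100.0] * agents
    for _ in range(rounds):
        a, b = random.sample(range(agents), 2)
        stake = random.random() * min(wealth[a], wealth[b])
        if random.random() < 0.5:
            wealth[a] += stake
            wealth[b] -= stake
        else:
            wealth[b] += stake
            wealth[a] -= stake
    return sorted(wealth, reverse=True)

w = exchange_economy(100, 100_000)
print(round(sum(w)))  # 10000 - total wealth is conserved
print(w[0] > w[-1])   # True - yet it is now distributed very unevenly
```

Models of this family are known to concentrate wealth over time even though every individual exchange is symmetric, which is the mathematical point being made above.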

In short, economies need governing; that is, they need a governmental referee to look out for fouls, exploitation and the inequalities that laissez faire so easily generates. However, this is not a call for the half-baked notions of government advocated by Marxists. Marx may have got the diagnosis at least partly right, but his medicine was poison. Slogans urging the workers to take control of the means of production may have a pre-revolutionary “the grass is greener on the other side” appeal, but slogans aren’t sufficient to build complex democratic government. Marx hoped that somehow, after the overthrow of the owning classes, the details of implementing “worker control” would sort themselves out. This hope was based on the supposition that a post-revolutionary society would consist of one class only, the working class, and since it is the clash of class interests that is at the root of conflict, a post-revolutionary society would, as a matter of course, truly serve worker interests. Ironically, the assumption that humans are capable of both rationally perceiving and serving their interests is at the heart of Marxist theory as much as it is at the heart of laissez faire capitalism.

In Marx the details of the post-revolutionary society are sparse. In my mixing with Marxists I would often hear half-baked ideas about some initial post-revolutionary government that would act as a forum served by representatives from local worker soviets. Because the post-revolutionary society is supposed to be a “one class society” it is concluded that there will be no conflicts of interest, and therefore only a one party government will be needed to represent the interests of a single class. This government, the “dictatorship of the proletariat”, would hold all the cards of power: the media, the means of production, the police and the army. But with power concentrated in one governing party, conditions would be ripe for the emergence of two classes. The bureaucracy of the one party command economy would, of course, become populated by a Marxist elite who would not only relate to the means of production in an entirely different way to the masses, but would also be strategically placed to abuse their power. Human nature, a subject about which Marx had so little to say, would tempt the Marxist elite to exploit the potential power abuses of a one party context. Marxists sometimes claim that there is no need for police and army in a one party paradise, but for the one-party governing class there is even more need for army and police in order to protect their exclusive hold on power. Moreover, the local soviets would provide the means of infiltrating and controlling the working classes via a system of informers and intimidators who would no doubt masquerade as the representatives of the working class.

It is ironic that both laissez faire capitalists and Marxists have faith in the power of a kind of “emergence” to work its magic. Both believe that once certain antecedent conditions are realized we are then on the road to a quasi-social paradise. For the laissez faire capitalist the essential precursor is a free economy. For the Marxist the overthrow of the owning classes is the required precursor that, once achieved, will allow all else to fall into place. There is a parallel here with the school of artificial intelligence that believes consciousness is just a matter of getting the formal structures of cognition right: once you do this, it is claimed, regardless of the medium on which those formal structures are reified, conscious cognition will just “emerge”. Get the precursors right and the rest will just happen, and you needn’t even think about it; the thing you are looking for will just ‘emerge’.


I have had enough of this Mathematical Politics business for a bit, so I think I will leave it there for the moment.

Sunday, June 24, 2007

Mathematical Politics: Part 9

Games Theory Breaks Down
Much of the theory behind the 1980s free market revolution depended on the notion of human beings competently making ‘rational’ and ‘selfish’ decisions in favour of their own socio-economic well-being. But there is rationality and rationality, and there is selfishness and selfishness. It is clear that human beings are not just motivated by the desires guiding Adam Smith’s invisible hand. If the cognizance of politics fails on this point then those other motivations, some of them like Forbidden Planet’s monsters from the id, will one day pounce from the shadows and surprise us. And the consequences can be very grave indeed.

The story of the Waco siege gives us insight into some of the more perverse perceptions and motives lurking in the human psyche, which if summoned take priority over the need to maximize one’s socio-economic status or even the need for self-preservation. The perverse altruism of the Waco cult members was neither accounted for nor understood by the authorities dealing with the siege. No doubt the latter, with all good intentions, wanted to end the siege without bloodshed, but they appeared to be using the cold war model. They tried very hard to offer incentives to get the cult members to defect, using both the carrot (safety) and the stick (cutting off supplies and making life generally uncomfortable with psychological pressures). All to no avail. Even when the senses of the cult members were assaulted with tear gas and their lives threatened by fire they did not budge from the cult’s compound. The authorities reckoned without the cult’s perverse loyalty to David Koresh, who by this time was claiming Son of God status and was fathering ‘Children of God’ through the cult’s women. That’s nothing new either: Akhenaten, 3500 years before Koresh, thought and did the same, as have so many other deluded religious leaders. The irony of it! The socio-biologists have a field day on this point!

If anything the measures taken by the authorities stiffened the resolve of the cult members, who saw in these attempts the very thing Koresh had warned them of. Koresh had succeeded in reinventing, once again, that well known cognitive virus which ensures that its hosts interpret any attempt to get rid of it as a sure sign of the presence of Satan. Hence, all attempts to dislodge the virus have the very opposite effect and simply embed it more deeply in the psyche. Thus, like the efforts to wriggle free from a knotted tangle or the struggles of prey to escape the backward pointing teeth of a predator, the bid for freedom by the most direct path only helps to consolidate the entrapment. The only way to untie a difficult knot is to slowly and patiently unpick it bit by bit.

Cold war games theory failed at Waco on at least two counts. Firstly, the rationality of the cult members was based on a false view of reality – they saw the authorities, a priori, as the agents of Satan and Koresh as God’s saviour. Secondly, the selfishness of the cult members was not that of looking after themselves. The greater selfishness was embodied in a loyalty to Koresh and his viral teaching. In the face of this, the incentives and disincentives offered by the authorities were worse than useless. It never occurred to them that here was a group of people who were prepared to go for a “Darwin award”.

As our society faces other issues where religion looms large, such as Iraq and Islamic terrorism, we may find that Smith’s invisible hand provides no solution.
To be continued....

Wednesday, June 13, 2007

Mathematical Politics: Part 8

Complex Adaptive Systems
The Santa Fe Institute is an affiliation of largely male academics seeking to spread the theoretical net as widely as possible – especially into the domain of human society and complex systems in general. The theoretical holy grail is to write equations encompassing all that goes on under the sun and thus arrive at a preordained order which, once captured, means that theoreticians can retire declaring their work to be done. The universe is then a museum piece embodied in a few equations we can muse over, knowing that if we crank their handles they will churn out the answers. They will thereby encode all secrets and mysteries, thus making them no longer secrets and mysteries.

Thank God that’s not true. Even granted that physics contains catchall equations, those equations are very general and not specific. Moreover, our current physics, with its appeal to the absolute randomness of quantum fluctuation suggests that endless novelty is encoded into physical processes. As Sir John Polkinghorne points out in his book “Exploring Reality” the chaotic tumbling of the asteroid Hyperion is maintained by the underlying perturbations of quantum fluctuations. The motion of Hyperion is forever novel.
John Holland is and was a pioneer in the field of genetic algorithms. His work (amongst that of others) has revealed a close connection between learning systems and evolution. Both processes ring the changes and lock in successful dynamic structures when they find them. These structures select themselves because they have the adaptable qualities needed for self-maintenance in the face of the buffetings of a world in restless change. In evolution those structures are phenotypes adapted to their ecological niche; in learning systems they are algorithms encoding successful models of the world thereby allowing their host organism to anticipate aspects of that world.
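Holland's idea can be shown in miniature. The sketch below evolves bit-strings toward an all-ones 'phenotype'; this one-max fitness function is a standard textbook stand-in, not Holland's own example:

```python
import random

def evolve(length=20, pop_size=30, generations=200, seed=0):
    """Minimal genetic algorithm: selection keeps the fitter half of the
    population unchanged (locking in successful structures), while mutated
    copies of the survivors ring the changes.  Fitness = number of ones."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # select the fittest
        survivors = pop[:pop_size // 2]       # lock in what works
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(length)] ^= 1  # mutate one locus
            children.append(child)
        pop = survivors + children            # keep exploring
    return max(sum(individual) for individual in pop)

print(evolve())  # best fitness found after 200 generations
```

Because the survivors are carried over intact, the best fitness never decreases; the mutated copies supply the novelty that selection then filters, which is the learning/evolution parallel Holland drew.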

In his book “Complexity”, Mitchell Waldrop tells of John Holland’s lecture to the Santa Fe Institute. He describes how the theoreticians listening to Holland’s lecture were gobsmacked as Holland delivered a home truth – there aren’t any final equations (apart, perhaps, from some very general physical constraints), because reality is exploring the huge space of possibility and is therefore delivering endless novelty. That endless novelty can’t be captured in a specific way in some catchall theory. In fact there is really only one thing that can cope with it – learning systems like human beings – or as Holland calls them, “complex adaptive systems”. These are systems that are themselves so complex that they have the potential to generate an endless novelty, a novelty that matches or perhaps even exceeds that of their surroundings. Thus, these systems are either capable of anticipating environmental novelty or at least able to learn from it when it crops up. There are no systems of equations capturing everything there is to know about complex adaptive systems or the environments they are matched to cope with. Therefore it follows that apart from God Himself there is only one system with a chance of understanding something as complex as a human being with all its chaotic foibles, and that is another human being.

To be continued.....

Thursday, May 24, 2007

Mathematical Politics: Part 7

Expecting the Unexpected.
The “super complexity” of human beings means that they are capable of throwing up unexpected “anomalies”, and by “anomalies” I don’t mean phenomena that are somehow absolutely strange, but only phenomena not covered by our theoretical constructions. Just when you think you have trapped human behavior in an equation, out pops something not accounted for. These anomalies strike unexpectedly and expose the limits of one’s analytical imagination. They can be treated neither statistically, because there are too few of them, nor analytically, because the underlying matrix from which they are sourced defies simple analytical treatment.

Take the example I have already given of the supermarket checkout system. This system can, for most of the time, be treated successfully using a combination of statistics, queuing theory and the assumption that shoppers are “rational and selfish” enough to look after the load-balancing problem. But there is rationality and rationality. For example, if there is a very popular till operator who spreads useful local gossip or who is simply pleasant company, one might find that this operator’s queue starts to lengthen unexpectedly. The simplistic notions of self-interest and ‘rationality’ break down. Clearly in such a situation there is a much more subtle rationality being served. What makes it so difficult to account for is that it taps into a social context going far beyond what is happening in the supermarket queue. To prevent these wild cards impairing the function of the checkout system (for example, disproportionately long queues causing blockages) the intervention of some kind of managerial control may, from time to time, be needed.

In short, laissez faire works for some of the people for some of the time, but not for all the people all of the time.

To be continued.....

Sunday, May 13, 2007

Mathematical Politics: Part 6

Mathematical Intractability
Randomness is a complexity upper limit – size for size, nothing can be more complex than a random distribution generated by, say, the tosses of a coin. A sufficiently large random distribution configurationally embeds everything there possibly could be. And yet in spite of this complexity, it is a paradox that at the statistical level randomness is very predictable: for example, the frequency of sixes thrown by a die during a thousand throws can be predicted with high probability. In this sense randomness is as predictable as those relatively simple, highly organized physical systems like a pendulum or the orbit of a comet. But in between these two extremes of simplicity and complexity there is a vast domain of patterning that is termed, perhaps rather inappropriately, “chaotic”. Chaotic patterns are both organised and complex. It is this realm that is not easy to mathematicise.
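As an aside, the statistical tameness of randomness is easy to demonstrate with a few lines of Python (my own illustrative sketch; the seed and number of throws are arbitrary choices of mine):

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

N = 100_000
# simulate N throws of a fair die and count the sixes
sixes = sum(1 for _ in range(N) if random.randint(1, 6) == 6)
frequency = sixes / N

# the individual throws are maximally unpredictable, yet the
# aggregate frequency reliably settles near 1/6 = 0.1667
print(f"observed frequency of sixes: {frequency:.4f}")
```

No single throw can be predicted, yet the long-run frequency is as dependable as a pendulum.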

We know of general mathematical schemes that generate chaos (like for example the method of generating the Mandelbrot set), but given any particular chaotic pattern finding a simple generating system is far from easy. Chaotic configurations are too complex for us to easily read out directly from them any simple mathematical scheme that might underlie them. But at the same time chaotic configurations are not complex enough to exhaustively yield to statistical description.
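To illustrate just how short a chaos-generating scheme can be, here is a minimal sketch of the Mandelbrot membership rule (my own few lines, using the standard escape-radius test; the iteration cap is an arbitrary choice):

```python
# iterate z -> z*z + c and ask whether z stays bounded;
# the boundary of the set of bounded c values is famously chaotic
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is outside the set
            return False
    return True              # still bounded after max_iter steps

print(in_mandelbrot(0j))      # centre of the set: True
print(in_mandelbrot(1 + 0j))  # escapes quickly: False
```

A generator this brief yields endless intricacy, but going the other way – recovering such a rule from a given chaotic pattern – is the hard direction.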

The very simplicity of mathematical objects ensures that they are in relatively short supply. Human mathematics is necessarily a construction kit of relatively few symbolic parts, relations and operations, and therefore relative to the vast domain of possibility, there can’t be many ways of building mathematical constructions. Ergo, this limited world of simple mathematics has no chance of covering the whole domain of possibility. The only way mathematics can deal with the world of general chaos is to either simply store data about it in compressed format or to use algorithmic schemes with very long computation times. Thus it seems that out there, there is a vast domain of pattern and object that cannot be directly or easily treated using statistics or simple analytical mathematics.

And here is the rub. For not only do human beings naturally inhabit this mathematically intractable world, but their behavior is capable of spanning the whole spectrum of complexity – from relatively simple periodic behaviour like workaday routines, to random behaviour that allows operations theorists to make statistical predictions about traffic flow, through all the possibilities in between. This is Super Complexity. When you think you have mathematicised human behaviour it will come up with some anomaly....

To be continued.....

Thursday, May 03, 2007

Mathematical Politics: Part 5

The Big But
Complex systems theory, when applied to human beings, can be very successful. It is an interesting fact that many measurable human phenomena, like the size of companies, wealth, Internet links, fame, the size of social networks, the scale of wars, etc., are distributed according to relatively simple mathematical laws - laws that are qualitatively expressed in quips like “the big get bigger” and “the rich get richer”. It is an interesting fact that the law governing the distribution of, say, the size of social networks has a similar form to the distribution of the size of craters on the moon. This is difficult to credit given that the objects creating social networks (namely human beings) are far more complex than the simple elements and compounds that coagulated to produce the meteors that have struck the moon. On the other hand, there is an upper limit to complexity: complexity cannot get any more complex than randomness, and so once a process like meteor formation is complex enough to generate randomness, human behavior in all its sophistication cannot then exceed this mathematical upper limit.
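The “rich get richer” quip can be given a toy demonstration (a Pólya-urn-style sketch of my own devising, not a serious economic model; the agent count and round count are arbitrary):

```python
import random

random.seed(0)

# each new unit of wealth goes to an existing agent with probability
# proportional to what that agent already holds - the rich get richer
wealth = [1] * 10            # ten agents, one unit of wealth each
for _ in range(10_000):
    winner = random.choices(range(len(wealth)), weights=wealth)[0]
    wealth[winner] += 1

wealth.sort(reverse=True)
print(wealth)  # a handful of agents end up holding most of the wealth
```

Even with identical agents and no skill differences, the simple proportional rule alone manufactures a heavily skewed distribution.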

The first episode of “The Trap”, screened on BBC2 on 11th March 2007, described the application of games theory (a special case of complex systems theory) to the cold war. The program took a generally sceptical view (rightly in my opinion) of the rather simplistic notions of human nature employed as the ground assumptions needed to make games theory and the like applicable to humanity. To support this contention the broadcast interviewed John Nash (he of “A Beautiful Mind” and “Nash Equilibrium” fame – pictured), who admitted that his contributions to games theory were developed in the heat of a paranoid view of human beings (perhaps influenced by his paranoid schizophrenia). He also affirmed that in his view human beings are more complex than the self-serving, conniving agents assumed by these theories.

Like all applications of mathematical theory to real world situations there are assumptions that have to be made to connect that world to the mathematical models. Alas, human behaviour does, from time to time, transcend these models and so in one sense it seems that human beings are more complex than complex. But how can this be?

To be continued....

Wednesday, April 18, 2007

Mathematical Politics: Part 4

The Robustness of Complex System Theory
Most people, when checking out of a supermarket, will select the queue they perceive to require the least waiting time. The result is that the queues in a supermarket all stay roughly the same length. People naturally distribute themselves equitably over the available queues, perhaps even taking into account the size of the shopping loads of those queuing. Thus, the load balancing of supermarket queues doesn’t need a manager directing people to the queues: the decisions can be left to the shoppers. Because this decentralised method of load balancing uses the minds of many shoppers, each of whom is likely to be highly motivated to get out of the shop quickly, it is probably superior to the single and perhaps less motivated mind of someone specially employed as a queue manager. Supermarket queuing is just one example of order - in this case an ordered load balancing system - emerging out of the behaviour of populations of autonomous but interacting components.
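The supermarket scenario is simple enough to simulate in a few lines (a sketch of my own; the till count, arrival rate and service rate are illustrative numbers, not measurements):

```python
import random

random.seed(1)

queues = [0] * 5   # five checkout tills, measured in queued shoppers

def join_shortest(queues):
    """Each arriving shopper picks the till with the least waiting."""
    shortest = min(range(len(queues)), key=lambda i: queues[i])
    queues[shortest] += 1

for tick in range(200):
    join_shortest(queues)                 # one shopper arrives per tick
    for i in range(len(queues)):          # each till may serve a shopper
        if queues[i] > 0 and random.random() < 0.5:
            queues[i] -= 1

print(queues)  # no manager needed: the queues stay roughly level
```

The load balancing emerges entirely from the shoppers' local, selfish rule; nothing in the code plays the role of a central queue manager.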

It is this kind of scenario that typifies the application of complex systems theory. When it is applied to human societies the assumption is that people are good at looking after themselves both in terms of their motivation and having the best knowledge of the situation on the ground. The stress is on the responsibility of the individual agents to make the right local decisions serving themselves. In looking after their own affairs they, inadvertently, serve the whole. In short the economy looks after itself. This is the kernel of Adam Smith's argument in “Wealth of Nations”.

So, the argument goes, for the successful creation and distribution of wealth the centralised planning of a command economy is likely to be a less efficient decision-making process than that afforded by the immense decisional power latent in populations of people who are competent in identifying and acting on their own needs and desires. In particular, technological innovation is very much bound up with the entrepreneurial spirit that amalgamates the skills of marketers and innovators who spot profit opportunities that can be exploited by new technology. Hence, free market capitalism goes hand in hand with progress. Such activity seems well beyond the power of some unimaginative central planner. It has to be admitted that there is robustness in this argument; centralised planners don’t have the motivation, the knowledge or the processing power of the immense distributed intelligence found in populations of freely choosing agents.

But there is always a but.....

To be continued....

Wednesday, April 11, 2007

Mathematical Politics: Part 3

The Rise of Postmodernism: In a scenario that could itself serve as a complex systems case study, the political perturbations of the eighties were beset with a chaotic cascade of ironies. As Thatcher and Reagan made it their business to dismantle the power of central government in favour of a decentralized market of economic decision makers, their anti-interventionist policies were readily portrayed as the path to true freedom. In contrast the traditional left-wingers, as advocates of an economy planned by a strong central government, opened themselves up to being accused of aiming to meddle in people’s affairs and thereby curtailing their freedoms. Moreover, the left, which so often identified itself as the friend of the benign self-regulating systems of the natural ecology, never let on that the natural world had an isomorphism with the self-regulating mechanisms of the free market. The left might rail against big business as it polluted and disturbed a natural world that functioned best without intervention, and yet the left had no qualms about disturbing the free market with their planned economy. But the radical right also presented us with a paradox. If they were to push through their free-market vision they had to use strong central government to do so. Thus, like all radical governments since the French revolution who believed their subjects were not free to choose their freedom, the radical right faced the logical conundrum encapsulated in the phrase “the tyranny of freedom”. Thus, as is the wont of those who think they should be in power but aren’t, it was easy for the left to cast the radical right as the true despots.

So who, then, was for freedom and who wasn’t? The left or the right? Both parties had marshaled some of the best intellects the world has seen, and yet they seem to have led us into an intellectual morass. Belief in man’s ability to make sense of his situation was at a low ebb, and against a backdrop of malaise and disaffection it is not surprising that there should arise a widespread distrust of anyone who claimed to really know universal truths, whether from the left or right. The Postmodernists believed they had the answer to who was for freedom and who wasn’t – or perhaps I should say they didn’t have the answer, because Postmodernism is sceptical of the claims of all ‘grand narratives’, like Marxism or complex systems theory, to provide overarching explanations and prescriptions for the human predicament. Postmodernism consciously rejected the ‘grand narratives’ of left and right as not only intellectual hubris but hegemonic traps tempting those believing in these narratives to foist them upon others, by coercion if needs be. The grand narratives that both parties held, and which they promulgated with evangelical zeal, led them to infringe the rights of the individual and engage in a kind of conceptual imperialism. Those of an anti-establishment sentiment, who in times past found natural expression and hope for liberation in Marxism, no longer feel they can identify with any grand philosophy and instead have found their home in the little narratives of postmodernism, where contradiction, incoherence and fragmentation in one’s logic are not merely accepted but applauded as just rebellion against the intellectual tyranny of the know-all grand theorists.

But irony is piled on irony. Postmodernism, as the last bastion of the anti-establishment, is in one sense the ultimate decentralisation, the ultimate laissez faire, the ultimate breakdown into individualism. One is not only free to do what one fancies but is also free to believe what one fancies. The shared values, vision and goals of civic life are replaced with conceptual anarchy. With the failure of Marx’s grand narrative to make sense of social reality, those of an anti-establishment sentiment no longer have a philosophy to pin their hopes on; instead they have unintentionally thrown in their lot with the radical right: they are carrying out the ultimate live experiment with a system of distributed living decision makers. According to complex systems theory either some kind of organised equilibrium or chaotic fluctuation will ensue, whether the postmodernists believe it or not. You can’t escape the grand narratives of mathematics, although you might like to think you can.

To be continued....

Thursday, April 05, 2007

Mathematical Politics: Part 2

Marxism on the Run: At about the time free market economics was in the ascendancy under Margaret Thatcher and Ronald Reagan I was involved in a study of Marxism, in the course of which I even attended some Marxist meetings. For me the two political philosophies were thrown into sharp contrast, but the radical right, with their allusions to mathematical systems theory, were beginning to show up Marxism for what it was: a Victorian theory of society that was now looking rather antiquated, or at the very least in need of a conceptual makeover. If Marxism failed to enhance itself with modern insights taken from systems theory it would become obsolete. And obsolete it was fast becoming; the Marxists I met were entrenched in nineteenth century ideas and they weren’t going to update them. For example, the radical right’s excursion into systems theory was debunked by these Marxists as just another piece of intellectual sophistry devised by the intellectuals of the propertied classes with the aim of befuddling us workers and obscuring the reality of class conflict. It was clear that this old style Marxism was not going to make any serious attempt to engage with these new ideas.

Another serious failing of Marxism, and another sign of its nineteenth century origins, is that its theory of human nature has not advanced much beyond Rousseau’s naive concept of the noble savage. In fact one Marxist I spoke to on this subject suggested that the nature of human nature is irrelevant, and he simply reiterated that well-known Marxist cliché about “economic realities being primary”. He was still working with the seventeenth century Lockean view of human nature; to him, humans were the ‘blank slate’ that Steven Pinker has so eloquently argued against. All that mattered was getting the economic environment right, and to hell with all these ideas about the neural substrate on which human nature is founded and its origins in the recipes of genetics.

And ‘hell’ is not such an inappropriate term, even for an atheistic philosophy like Marxism. I am not the most enthusiastic advocate of laissez faire capitalism, but it seemed to me that Marxist theory was going to seed, one sign of this being that the Marxists I met were dismissing any robust challenges by assuming from the outset that they were cynical attacks by the middle classes. Basically their response was little different from “the Satan argument” used by some Christians to protect their faith. The Satan argument posits a win-win situation from the outset and it works like this: if a challenge is made to the faith that cannot be easily countered, then that challenge must come from Satan (= the middle class) and therefore should be ignored. It is impossible to overcome this kind of conceptual defense, because the more successful the challenge the more strongly it will be identified as “Satanic” (or bourgeois).

When Soviet Russia collapsed at the beginning of the nineties it seemed that Marxism was a spent force. Cult Marxism still lingered, of course, but a new generation of anti-establishment idealists who needed a philosophy they could call their own were left as intellectual orphans. Where would they find a home?

To be continued...

Saturday, March 31, 2007

Mathematical Politics: Part 1

Complex Systems Theory: I recently watched the three episodes of “The Trap”, a documentary screened on BBC2 on Sunday night. The program studied the development of western political policy from the 1950s to date. As a rule I am not greatly interested in politics, and I probably glanced at the entry in the Radio Times for the first episode and dismissed it. However, just by chance I happened to turn on the TV at the start of the first program and I immediately found myself lapping it up. The first program wasn’t an exposé of the gossipy particulars and intrigues of political life, but told of the application of games theory to the cold-war stalemate. This really interested me because here was a program dealing with fundamental principles and not particulars. I have a nodding acquaintance with games theory, but as I have never really closely attended to politics I didn’t know, as the program alleged, that games theory had been so seriously applied during the cold war decades in order to deliberately create a stalemate that circumvented nuclear war. Although this passed me by at the time, I was aware of another trend in politics that was alluded to in this first episode of ‘The Trap’; that is, the radical right’s application of complex systems theory to socio-economics during the nineteen eighties.

Complex systems theory is not a single theory as such but an interdisciplinary, largely mathematical subject combining theoretical insights taken from a variety of disciplines, from physics, through information theory, to computational theory. It is of great generality, having many applications, and encompasses games theory as a special case. It is a theory that deals with systems of relatively simple interacting parts, where each part obeys some basic rules determining just how it interacts with other parts of the system. What piques the interest of complex systems theorists is that so often these systems of interacting parts show “self organizing” behavior; that is, the parts of these complex systems organize themselves into highly ordered forms. Take for example the spectacle of synchronized flying displayed by a large flock of birds. This behavior can be simulated with computer models using the fundamental insight of systems theory – namely, that the complex organized behavior of flocks of birds emerges as a result of some basic rules determining how each individual of the flock responds to its neighbors. The crucial observation is that to produce this self-organizing effect no central control is needed - just simple rules telling each part how to look after itself. In the cold war period the “players” in the nuclear deterrence game looked after themselves by responding to the threat that each posed to the other, and the result, it was inferred, would be a stalemate; or, in the mathematical speak of games theory, “a Nash equilibrium”. All-out nuclear war was thereby avoided. Most people have an intuitive grasp of how mutual deterrence is supposed to prevent either side making an aggressive move, but games theory supports this intuitive notion with mathematical rigor. The self-organized outcome of the cold war game was, the theory suggests, a peaceful, if rather tense, coexistence. That was the theory anyway.
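The flocking example can be boiled down to a toy simulation (my own sketch; real “boids” models have each bird respond only to nearby neighbours and in two or three dimensions, whereas for brevity I let every bird steer toward the flock-wide average heading):

```python
import random

random.seed(3)

# twenty birds with random initial headings (in degrees)
headings = [random.uniform(0, 360) for _ in range(20)]

for _ in range(100):
    avg = sum(headings) / len(headings)
    # simple steering rule: nudge each heading 10% toward the average;
    # no bird is in charge, yet synchrony emerges
    headings = [h + 0.1 * (avg - h) for h in headings]

spread = max(headings) - min(headings)
print(f"heading spread after 100 steps: {spread:.5f} degrees")
```

A rule this simple, applied by each individual, collapses an initially disordered flock into synchronized flight with no central controller anywhere in the code.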

Complex systems theory in its general form made its presence felt in politics during the eighties with the swing toward free market economics under Margaret Thatcher and Ronald Reagan. In 1987 I watched a documentary in which the ‘radical right’, as they were called, stated their case. It was clear to me even then that the radical right really had got their intellectual act together. They bandied about terms such as ‘distributed processors’, ‘local information’, ‘self regulation’, ‘self organizing systems’ and the like – all things that are very familiar to a complex systems theorist. According to the radical right, central government should refrain from interfering with the natural processes of the free market, processes that solve the problems of wealth creation and distribution in ways analogous to decentralized natural systems like the ants’ nest, the brain and the hypothetical Gaia. In these natural systems there is no central control; the ‘intelligence’ of the system is distributed over many relatively simple parts, and these parts behave and interact with one another using some basic rules. Likewise, society, it is conjectured, can be modeled after the fashion of these decentralized systems. Government intervention, according to the radical right, is likely to disrupt the natural self-regulating mechanisms of the free market. In fact no central planner could ever have enough information or even the cognitive wherewithal to do what the market’s many decentralized processors do. The notions behind Adam Smith’s ‘Wealth of Nations’, so hated by Marx, were now seen as a special case of complex systems theory. Smith’s vision of a system of autonomous wealth producers making local decisions based on their surroundings, thereby generating an economic order, echoed the self-organizing properties of many natural systems. Intelligence, rationality and order “emerge” out of these distributed systems, and that, it is contended, also holds for the free market.

That’s the theory anyway.

To be continued...

Monday, February 12, 2007

Time Travel

It is sometimes said amongst physicists that we should take the predictions of our theories seriously, even when those theories predict seemingly unlikely results. For example, the wave theory of light predicted that a circular obstruction placed in front of a source of light casts a circular shadow with a small bright spot of light at the center of the shadow. This bright spot seemed such an unlikely prediction that it was taken at first to indicate that the wave theory of light was false. However it wasn’t long before “Fresnel’s bright spot”, as it was called, was observed experimentally. Another well known example of an unlikely prediction was Einstein’s discovery that his gravitational field equation, when applied to the cosmos, did not admit static solutions; that is, it only allowed either a contracting or an expanding universe. Einstein didn’t believe this result because it seemed to him that the universe was patently static, and so he introduced a term into his equation (the cosmological term) to launder out a prediction that subsequently proved to be correct; we now believe the universe is far from static – as the well known cliché goes, the universe is expanding. It seems that once we believe we have twigged the logic behind the physical world we must follow that logic through and trace it to its many inexorable conclusions. Often we find that the results of that logic are against all expectation, but so often careful observation has shown the expectations to be wrong and the logic right.

So the moral of the story, it has been suggested, is that we should take seriously the laws we believe we have discovered. But not too seriously is my suggestion, for there is a balance to be kept here. Consider, for example, Hooke’s law, a law telling us how materials deform when forces are applied. This simple law states that the deformation of a material is proportional to the magnitude of the force applied to it. Basically Hooke’s law gives us a straight-line graph of deformation against force, and the validity of this graph can be experimentally tested; lo and behold, experimental plots fall more or less on the predicted straight line. If one takes something like the compression of a spring one finds that this compression obeys Hooke’s law. However, if we extrapolate the straight-line graph of proportionality we find that it passes through the origin, which at once presents us with a problem: Hooke’s law predicts that a finite force is capable of compressing the length of the spring to zero. Obviously this is wrong, and thus it is clear that Hooke’s law is a law applying only within limits. So, the moral of the story is yes, let’s take the logic of our laws seriously, but not too seriously, otherwise we might be in for a shock.
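The over-extrapolation can be made concrete with a couple of lines of arithmetic (a sketch of my own; the spring constant and length are invented illustrative values):

```python
# Hooke's law: force proportional to deformation, F = k * x
k = 200.0        # spring constant in N/m (illustrative value)
length = 0.10    # natural length of the spring in metres

# within the elastic range the law behaves sensibly:
force_1cm = k * 0.01         # force to compress the spring by 1 cm
print(force_1cm, "N")

# blind extrapolation: the straight line says this finite force
# compresses the spring to zero length, which is absurd
force_to_zero = k * length
print(force_to_zero, "N")
```

The law fits beautifully over the measured range, yet the same straight line, followed to the origin, claims a modest finite force squashes the spring out of existence.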

It may be objected here that Hooke’s law is not truly fundamental; that is, unlike the deep laws of physics, which it is assumed have some kind of all-embracing applicability, Hooke’s law is clearly only a limited approximation. Trouble is, we can’t yet claim to have grasped what is truly fundamental. The difficulties of unifying Einstein’s theory of gravity with quantum theory do hint that neither theory is absolutely fundamental; like all theories so far, both have limits to their application and await an underlying theory of greater applicability. Note that I am careful to say here “of greater applicability” rather than “of all-embracing applicability”, as I feel we should never be too presumptuous in our beliefs about having reached some kind of absolute fundamental level.

* * *

There is clearly a balance to keep here between seriousness and not-too-seriousness, but it seems we are passing through times when physics itself is in danger of losing its serious status altogether, and this may in part be down to over-extrapolation-itis, which leads to unlikely, almost comical claims. In fact I recently dug out an old video recording of a Horizon program broadcast on BBC2 on the 18th December 2003 on the subject of time travel. The program was introduced as “.. sorting out science fiction from science fact.” and then “This is an unlikely tale about an unlikely quest. The attempt to find a way to travel through time. The cast is an unlikely one too. God, a man in a balaclava and a pizza with pretensions.” It then cut to a paranoiac Kurt Gödel wearing his eye and mouthpiece balaclava, which he may have worn because he believed there was a conspiracy against him. This was followed by a turquoise jacketed Professor Richard Gott of Princeton University who, as he waved his hand over a sliced pizza, solemnly declared “This is a tarm machine!” It was December 18th; April 1st was more than three months away, so the program wasn’t one of the BBC’s fools’ day broadcasts. So what was their excuse?

However, in spite of this bizarre beginning, the program at first continued sanely enough with an account of the well-known Einsteinian type of time travel. You know the sort of thing; you leave Earth in a spaceship and after a year traveling around the galaxy at near light speed you return to Earth and find that ten years have elapsed to your one year. This is a well-established example of the “Rip Van Winkle effect” and is not far removed from what happens when one experiences a period of unconsciousness and is therefore unaware of the passage of time as experienced by other people.

But backwards time travel is another matter. My own guess is that reverse time travel at will is impossible, but there are quite a few people out there who believe it is possible. So it was time to bring on the cheerful eccentrics, with their turquoise jackets, and paranoiac balaclavas. Professor Michio Kaku of the City University of New York prepared the way for us:

MICHIO: “Some people complain that we physicists keep coming up with weirder and weirder concepts, the reason is we are actually getting closer and closer to the truth. So if we physicists keep coming up with crazier and crazier ideas that’s because that’s the way the universe really is. The universe is crazier than any of us really expected.”
Ah! I get it! As physics gets weirder and weirder we need weirder and weirder people to do physics, and you can’t get weirder than what now followed. The program then went to New Orleans and ferreted out an American writer called Patricia Rees, who has written books on real life time travelers and who told Horizon that there are probably thousands of people doing time travel. She was in New Orleans to drop in on one of those thousands of time travelers, whom the program referred to as “a voyager in time called Aage Nost”. “Aage Nost claims to have his very own time machine,” said the narrator. Aage’s time machine had lights, wires, an electronic box of tricks and two coils of wire – in other words, all the things one expects a really serious time machine to have, at least in the movies. From what we saw of this time machine it looked a lot more plausible than Professor Gott’s pizza, but this machine is not for the fashion conscious because you had to wear it to work it – or at least you had to wear one coil on your head and hold the other in your hand. It looked a lot more fashionable than Kurt’s balaclava. Perhaps Aage should have built his coil into a balaclava, and then the academic community might have started taking him seriously. Anyway, by using this time traveling system Aage was able to tell us that in the fall of 2005 there would be an uprising in America and the military would install a new government. Did I miss that? I bet I'm in the wrong parallel universe. Shucks.

But the worst was yet to come, because we then moved on to the really serious scientists who, according to Horizon, make Aage’s efforts look positively mundane. I was still trying to sort out science fiction from science fact when on came none other than the great Professor Frank Tipler who, “by sheer coincidence” said the program, also lived in New Orleans. The reference to “sheer coincidence” was obviously sarcasm, because Horizon’s random sampling of the population of New Orleans clearly shows that it is full of serious time travelers, and this explains the apparent coincidence.

It became apparent that Tipler had revived an idea of the paranoiac Gödel. Using Einstein’s equation, Gödel had shown that in a rotating universe time travel is possible. But it was pointed out by Frank that there was one problem here … wait for it … the universe isn’t rotating. Frankly, I think Frank does have a point here, because if the universe were rotating we would all feel rather dizzy. In fact the narrator put it quite subtly, calling Gödel’s idea “complete nonsense, for the universe we live in does not rotate”. Anyway, the brilliant Professor Tipler suggested we simply use a much smaller spinning object, like a black hole or a spinning cylinder, and he has done the necessary calculations to show that it is possible. My suggestion is that we use Frank’s arms, which revolve at great speed in opposite directions as he speaks. Problem solved.

The turquoise jacketed Professor Richard Gott of Princeton has other ideas, involving “cosmic strings”, which Horizon pointed out “have never been observed in the real world, they're entirely theoretical”. But an undaunted Gott proceeded to show us how time travel is possible using these ontologically challenged entities. For this scene Gott was in a pasta parlor and he was going to illustrate his point using one of the parlor’s dishes. Perfect, I thought: if he is a string theorist he’s bound to order spaghetti. Wrong, he ordered a pizza, which goes to show that even string theorists have heard that spaghetti is not the only food on the menu. He then cut two slices out of the pizza, took a mouthful from a slice, and then in thick bread-muffled tones explained his theory, ultimately uttering his famous line as his hand traced the circle of the pizza: “This is a tarm machine!”. To prove the point he pulled out a fat, comically proportioned toy space shuttle, which no doubt by sheer coincidence happened to be in the pocket of his renowned turquoise jacket. He demonstrated how the cuts in the pizza allowed his toy shuttle to make faster than light jumps in “space-tarm”, which, by extrapolating Einstein’s special theory of relativity to velocities greater than light, leads to a violation of the normal sequencing of time and hence allows time travel. But there is one little snag here: as well as being based on entirely theoretical notions, Gott also admitted, “[With] the tarm machine that I propose using cosmic strings, [if] you wanted to go back in tarm about a year, it would take half the mass of our galaxy”. No problem – at least in America, where pizzas, like the one Gott was playing with, have galactic dimensions. Hey Richard, didn’t your Ma ever teach you not to play with your food? Just as well she didn’t, otherwise we wouldn’t have solved the time... sorry... tarm travel problem.

But there is one other spanner in the works. Because the time machines so far proposed work by cutting and warping space-time, you can’t go back to any time before the machine started to make these adjustments to space-time. Hence, as Paul Davies pointed out on the program, if we want to go back in time to see the dinosaurs we would have to depend on “some friendly” alien lending us a time machine they made earlier - at least 65 million years earlier, in fact. Once again, no problem! We know lots of friendly aliens out there, and Horizon reminded us of this by cutting to a scene from Dr. Who of invading Daleks rasping out “Exterminate, Exterminate”. At this point I was having real trouble trying to sort out the science fiction from the science fiction.

* * *

After a show like this I was left wondering if anyone takes physics and physicists seriously anymore. Some of the physicists we met in this Horizon program came over as a set of good-natured brainy clowns, good for a laugh but not operating in the real world: we needn’t listen to them, except perhaps to giggle at how these clever people have wormed their way into such a rarefied, highfalutin world that they are of little relevance to our lives. Their time travel theories are no doubt an extremely clever, albeit irrelevant, lighthearted diversion, as clearly their ideas cannot be implemented by any technology that is just round the corner. Bright beyond the ken of the average person they may be, but the compensating recourse to a simple, almost childlike humor in order to humanize them has the effect of making them even more remote. After all, playing with your food in a restaurant, accompanied by a cutesy-looking space shuttle, does not come over as normal behaviour. And from my point of view, with a little knowledge of physics, I can’t help feeling that we have here an example of the Hooke’s law fallacy: that is, by a little over-extrapolation all sorts of silly things can be proved, as long as we can find sufficiently brainy and silly people to prove them.

Having thoroughly lampooned physics and physicists, it was now time for the Horizon program to move in and deliver the coup de grâce. It started quietly enough with Frank Tipler telling us about Moore’s Law of computing power, which states that computer power doubles every 18 months:

FRANK: “People realised that processing speed of computers was increasing exponentially, every year, every few years, every eighteen months the processing speed would double. … Imagine this occurring faster and faster. If that were to occur it would be possible to process an infinite amount of information”.
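Tipler’s “faster and faster” clause is doing all the work in that quotation: at a fixed 18-month doubling interval, processing power diverges only as time goes to infinity, whereas an infinite amount of processing in finite time requires the doubling intervals themselves to shrink geometrically. A throwaway sketch of the two schedules (the function names and figures are mine, purely illustrative):

```python
# Ordinary Moore's law: power doubles every fixed interval of 18 months,
# so after n doublings a total of n * 18 months has elapsed. Processing
# power is finite at every finite time.
def fixed_time_for(n_doublings, interval=18.0):
    return n_doublings * interval

# Tipler's "faster and faster" version: each doubling takes half as long
# as the previous one. The total elapsed time is a geometric series
# converging to 2 * 18 = 36 months, so infinitely many doublings (hence
# unbounded processing) fit inside a finite window.
def accelerating_time_for(n_doublings, first_interval=18.0):
    return sum(first_interval / 2**k for k in range(n_doublings))

print(fixed_time_for(100))         # 1800.0 months for a mere 2**100 speed-up
print(accelerating_time_for(100))  # just under 36 months
```

Whether physics permits the accelerating schedule is, of course, precisely what this Hooke’s-law-style extrapolation glosses over.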

Yeah right Frank; using Hooke’s law I can predict that if I apply 100 lbs of pressure on this spring here it will compress to zero and not only disappear (i.e. become invisible!) but might even assume a negative length, whatever that means. But OK Frank, I’ll play the game; let’s imagine this extrapolation of Moore’s law for the sake of the argument. So where does it lead us? Prof David Deutsch of the Centre for Quantum Computation told us:

DAVID: “In the distant future simulating physical systems with very high accuracy so that they look perfectly real to the user of the virtual reality will become common place and trivial.”

And Dr Nick Bostrom of Oxford University continues the argument to its conclusion.

NICK: “So imagine an advanced civilisation and suppose that they want to visit the past it might turn out not to be possible to build a time machine and actually go back into the past, physics might simply not permit that. There is a second way in which they could get the experience of living in the past and that would be by creating a very detailed and realistic simulation of the past….. An advanced civilisation would have enough computing power that even if it devoted only a tiny fraction of one percent of that computing power for just one second in the course of its maybe thousand years long existence, that would be enough to create billions and billions of ancestor simulations. There would be a lot more simulated people like you than there would be original non-simulated ones. And then you’ve got to think, hang on, if almost everybody like me are simulated people and just a tiny minority are non-simulated ones then I am probably one of the simulated ones rather than one of the exceptional non-simulated ones. In other words you are almost certainly living in an ancestor simulation right now.”
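Stripped of its rhetoric, Bostrom’s step is bare arithmetic: if you cannot tell from the inside which kind of observer you are, your credence of being simulated should track the population ratio. A toy version (the population counts are arbitrary placeholders of mine, not figures Bostrom computed):

```python
# Bostrom's indifference step as a ratio: an observer who cannot
# distinguish simulation from reality assigns herself to the majority.
# The counts below are arbitrary illustrative placeholders.
def credence_simulated(n_simulated, n_real):
    return n_simulated / (n_simulated + n_real)

print(credence_simulated(10**9, 10**6))  # billions of ancestor simulations
                                         # vs one real history: ~0.999
print(credence_simulated(0, 10**6))      # no simulations ever run: 0.0
```

Note that the entire force of the conclusion lives in the assumed counts; the arithmetic itself is trivial.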

David Deutsch gave us the implications for physics itself:

DAVID: “From the point of view of science it’s a catastrophic idea, the purpose of science is to understand reality. If we’re living in a virtual reality we are forever barred from understanding nature.”

And Paul Davies hints at the frightening philosophical specters that now haunt physics as a result:

PAUL: “The better the simulation gets the harder it would to be able to tell whether or not you were in a simulation or in the real thing, whether you live in a fake universe or a real universe and indeed the distinction between what is real and what is fake would simply evaporate away…..Our investigation of the nature of time has lead inevitably to question the nature of reality and it would be a true irony if the culmination of this great scientific story was to undermine the very existence of the whole enterprise and indeed the existence of the rational universe.”

Let me broach some of the cluster of philosophical conundrums raised by this embarrassing debacle that physics now faces.

Why should our concept of a simulated reality be applicable to the deep future? Doesn’t it rather presume that the hypothetical super-beings would need computers at all? The existence of computers is partly motivated by our own mental limitations - would a super-intelligence have such limitations? Or perhaps these simulating computers ARE the super-intelligences of the future. But then why would they want to think of us primitives from the past? Another problem: don’t chaos and the absolute randomness of quantum mechanics render anything more than a general knowledge of the past impossible? In that case any simulated beings would in fact be arbitrary creations - just one evolutionary scenario, a merely possible history, not necessarily the actual history. And overlying the whole simulation argument is the ever-unsettling question of consciousness: namely, does consciousness consist entirely in the formal relationships between the informational tokens in a machine?

But even if we assume that the right formal mental structures are a sufficient condition for conscious sentience, the problems just get deeper. If physics is a science whose remit is to describe the underlying patterns that successfully embed our observations of the universe into an integrated mathematical structure, then physics is unable to deliver anything about the “deeper” nature of the matrix on which those experiences and mathematical relations are realized. Thus, whatever the nature of this matrix, our experiences and the associated mathematical theories that integrate them ARE physics. If we surmise that our experiences and theories are the product of a simulation, physics cannot reach beyond itself and reveal anything about its simulating context. The ostensible aspects of the surmised simulation (that is, what the simulation delivers to our perceptions) ARE our reality: as Paul Davies observed, “… indeed the distinction between what is real and what is fake would simply evaporate away”. Moreover, if physics is merely the experiences and underlying mathematical patterns delivered to us by a simulation, how can we then reliably extrapolate from that “fake” physics to draw any conclusions about the hypothetical “real” physics of the computational matrix on which we and our ‘fake’ physics are being realized? In fact, is it even meaningful to talk about this completely unknown simulating world? As far as we are concerned the nature of that world could be beyond comprehension, and the whole caboodle of our ‘fake’ physical law, with its ‘fake’ evolutionary history and what have you, may simply not apply to the outer context that hosts our ‘fake’ world. That outer realm may as well be the realm of the gods. Did I just say “gods”? Could I have meant … ssshh … God?

The root of the problem here is, I believe, a deep potential contradiction in contemporary thinking that has at last surfaced. If the impersonal elementa of physics (spaces, particles, strings, laws and what have you) are conceived to be the ultimate/primary reality, then this philosophy (a philosophy I refer to as elemental materialism) conceals a contradiction. For it imposes primary and ultimate reality on physical elementa, and these stripped-down entities carry no logical guarantee as to the correctness and completeness of human perceptions. Consequently there is no reason, on this view, why physical scenarios should not exist where human perceptions as to the real state of affairs are wholly misleading, thus calling into question our access to real physics. Hence a contradictory self-referential loop develops as follows: the philosophy of elemental materialism interprets physics to mean that material elementa are primary, but this in turn has led us to the conclusion that our conception of physics could well be misleading. But if that is true, how can we be so sure that our conception of physics, which has led us to this very conclusion, is itself correct?

There is one way of breaking this unstable conceptual feedback cycle. In my youthful idealistic days I was very attracted to positivism. It seemed to me a pure and unadulterated form of thinking because it doesn’t allow one to go beyond one’s observations and any associated integrating mathematical structures; it was a pristine philosophy uncontaminated by the exotic and arbitrary elaborations of metaphysics. For example, a simulated reality conveying a wholly misleading picture of reality cannot be constructed, because in positivism reality is the sum of our observations and the mental interpretive structures in which we embed them - there is nothing beyond these other than speculative metaphysics. However, strict positivism is counterintuitive in the encounter with other minds, history, and even one’s own historical experiences. In any case those “interpretative structures”, like the principles of positivism themselves, look rather metaphysical. Hence, I reluctantly abandoned positivism in its raw form. Moreover, the positivism of Hume subtly subverts itself as a consequence of the centrality of the sentient observer in its scheme: if there is one observer (namely one’s self), then clearly there may be other unobserved observers, and perhaps even that ultimate observer, God Himself. Whatever the deficiencies of positivism, I was nevertheless left with a feeling that somehow sentient agents of observation, and their ability to interpret those observations, have a primary cosmic role; for without them I just couldn’t make sense of the elementa of physics, as these are abstractions and as such can only be hosted in the minds of the sentient beings that use them to make sense of experience. This in turn led me into a kind of idealism where the elementa of science are seen as meaningless if isolated from the a priori thinking cognitive agents in whose minds they are constructed.
In consequence, a complex mind of some all-embracing kind is the a priori feature that must be assumed to give elementa a full-blown cosmic existence. Reality demands the primacy of an up-and-running complex sentience in order to make sense of, and underwrite the existence of, its most simple parts; particles, spaces, fields etc. - these are the small fish that swim in the rarefied ocean of mind. This philosophy, for me, ultimately leads to a self-affirming theism rather than a self-contradictory elemental materialism.

The popular mind is beginning to perceive that physics has lost its way: University physics departments are closing in step with the public’s perception of physics as the playground for brainy offbeat eccentrics. My own feeling is that physics has little chance of finding its way whilst it is cut adrift from theism, and science in general has become a victim of nihilism. The negative attitude toward science, which underlies this nihilism, is not really new. As H. G. Wells once wrote:

"Science is a match that man has just got alight. He thought he was in a room - in moments of devotion, a temple - and that this light would be reflected from and display walls inscribed with wonderful secrets and pillars carved with philosophical systems wrought into harmony. It is a curious sensation, now that the preliminary splutter is over and the flame burns up clear, to see his hands lit and just a glimpse of himself and the patch he stands on visible, and around him, in place of all that human comfort and beauty he anticipated - darkness still."

Wells tragically lost his faith and with it his hope and expectation: He no longer believed the Universe to be a Temple on the grandest of scales, but rather a place like Hell, a Morlockian underworld with walls of impenetrable blackness. In that blackness Lovecraftian monsters may lurk. Nightmares and waking life became inextricably mixed. And in this cognitive debacle science could not be trusted to reveal secrets or to be on our side. The seeds of postmodern pessimism go a long way back.

But we now have the final irony. The concluding words of the Horizon narrator were:

"Now we’re told we may not even be real. Instead we may merely be part of a computer program, our free will as Newton suggested is probably an illusion. And just to rub it in, we are being controlled by a super intelligent superior being, who is after all the master of time."

The notions that we are being simulated in the mind of some super intelligence, that a naïve concept of free will is illusory, that we can know nothing of this simulating sentience unless that super intelligence should deign to break in and reveal itself are all somehow very familiar old themes:

“….indeed He is not far from each of us, for in Him we live and move and have our being…” (Acts 17:27-28)

“My frame was not hidden from You when I was made in the secret place. When I was woven together in the depths of the earth, Your eyes saw my unformed body. All the days ordained for me were written in Your book before one of them came to be” (Ps 139:15-16)

“…no one knows the Father except the Son and those to whom the Son chooses to reveal Him.” (Mat 11:27)

Have those harmless but brainy eccentric scientists brought us back to God? If they have, then in a weird religious sort of way they have sacrificed the absolute status of physics in the process.

Thursday, January 18, 2007

The Goldilocks Enigma

The following is some correspondence I had with Paul Davies on his latest book “The Goldilocks Enigma” (Penguin/Allen Lane, 2006).

Dear Paul,
I read your new book “The Goldilocks Enigma” over Christmas with great interest. I certainly found it a very informative and helpful survey of the latest ideas in this fascinating area – so interesting that I actually did an entry in my blog about it. I am particularly fascinated by your ideas on self-referencing necessity being found within the cosmos (I’m a theist so I have always automatically thought of necessity as lying outside the cosmos). Also, I very much feel that there is something in your conjecture that observers are not just incidental to the cosmic set up, but somehow imbue meaning and reality to the cosmos.

Your comments on Young’s slits have prompted me to have another think about this experiment. My understanding of the experiment in fig 32 on page 278 of your book is as follows: the “telescopes” do not detect a fringe pattern even when the object end of the “telescope” is placed in a dark area of the fringe pattern, because the wave from the sighted slit will travel right through the dark node and into the “tube” of the detector where, according to the respective probabilities, the state vector may “collapse” into a “detection state”. In this case, the tubular shape of the detector, if of sufficient aperture, blocks entry of waves from the other slit and so no fringe pattern will be observed. The three-dimensional nature of the wave field means that it is affected by the three-dimensional configuration of experimental set-ups, even if those set-ups change at the last moment. True, the particle’s wave field is a rather mysterious entity, as are the discontinuous swappings of the state vector, but I hadn’t up until now thought them to be governed by anything other than conventional “forward” causation (neglecting the effect of relativity in conjunction with state vector changes). In fact if you increase the wavelength sufficiently, then the wave from the other slit will “get into” the “telescope” and interference patterns will be re-established.

However, what I have said above may be entirely an artifact introduced by my own view of quantum mechanics. I tend to think only in terms of waves and discontinuous changes of the state vector. I don’t think in terms of particles: I see particles as an approximation brought about in cases where the state vector swaps to a localized form, thus giving the impression of “particles”. This perspective on QM tends to expunge teleology. But having said that, I must admit that you have prompted me to think again here: if one envisages a particle model of reality then the teleological issue does arise. Moreover, one might see in the telescope detector a more complex and therefore “more conscious” piece of apparatus than just a screen. So perhaps our own very sophisticated sentience acts as a “detector” that somehow removes spatial ambiguities even in past states and thus imbues them with greater spatial reality! But then according to the uncertainty principle, less spatial ambiguity is complementary with greater dynamic ambiguity, so the more we are aware of what something is spatially, the less aware we are of what it is becoming.

The moral of the story may be that artifacts in one’s perspective have a bearing. As we know, Newtonian dynamics can be developed using the “teleological” looking extremal principles. But, of course, these are mathematically equivalent to the conventional view that sees one event leading to another in sequence without recourse to end results. It is almost as if the choice of interpretation on the meaning of things is ours to make! Thus, perhaps the way we personally interpret the cosmos constitutes a kind of test that sorts out the sheep from the goats! Which are you? Some theists (but not me, I must add!) probably think you are a goat, but then some atheists probably have the same opinion! Can’t win can you?
Tim Reeves

Dear Tim,
Thank you for your thoughtful interpretation of the delayed choice experiment. I believe the teleological component in quantum mechanics is qualitatively quite distinct from the extremal principle of classical mechanics. One can formulate QM in that language too (via Feynman path integrals), but that concerns only the propagation of the wave function. The key point about the delayed choice experiment is the measurement (or collapse of the wave function), at which point the dynamics changes fundamentally. It is not the measurement per se that introduces the teleology, but the choice of which experimental configuration to use - a subtlety that lies at the heart of Wheeler's "meaning circuit." Correct though your observations of aperture diffraction etc. may be, I don't think the specific details of the telescope design and operation are germane to the central issue here. It is more a matter of whether one makes "this" sort of measurement, or "that." Complications with the telescope optics may produce "don't know" answers, but these can be filtered out.

I hate being pigeonholed, so I won't respond to the sheep/goats question.
With regards,

Paul Davies
(The above correspondence has been published with permission)
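A footnote on the Young’s slits discussion in the letter above: the claim that removing one slit’s wave removes the fringes is easy to check numerically. The sketch below superposes idealized unit-amplitude point-source waves from two slits; all the numbers are illustrative choices of mine, not figures from Davies’ book, and this is only the elementary two-source model, not the delayed-choice apparatus itself.

```python
import cmath
import math

def intensity(x, slit_positions, wavelength=1.0, screen_dist=1000.0):
    """Intensity at screen coordinate x from unit-amplitude waves
    emanating from each slit (toy two-source superposition model)."""
    k = 2 * math.pi / wavelength
    amplitude = sum(cmath.exp(1j * k * math.hypot(screen_dist, x - s))
                    for s in slit_positions)
    return abs(amplitude) ** 2

d = 10.0                       # slit separation, same units as wavelength
both_slits = [-d / 2, d / 2]
one_slit = [-d / 2]

# Both slits open: equal paths at the centre give a bright fringe
# (double the amplitude, four times the intensity), while a half-wavelength
# path difference near x = 50 gives a dark fringe.
print(intensity(0.0, both_slits))
print(intensity(50.0, both_slits))
# One slit blocked: the fringes vanish and the intensity is flat at 1.
print(intensity(0.0, one_slit), intensity(50.0, one_slit))
```

Increasing the wavelength relative to the slit separation pushes the fringe spacing out, in the spirit of the letter’s remark about long wavelengths re-establishing interference in the “telescope”.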