Thursday, April 24, 2008

In No Man's land

I have been studying the following ID Internet material: This, this, and this. In due course I intend to reply here.

If I were to keep within the terms of reference of the latter link then I would confine my efforts purely to the question of whether or not the Second Law of Thermodynamics contradicts evolution. However, like their opponents in the atheist corner, the ID community has a tendency to burn its boats and has little appetite for concessions; it has committed itself to the view that the Second Law is inconsistent with evolution. The no-holds-barred battle that is raging between the ID community and the evolutionists is making life difficult – it’s like going bird watching in a World War I trench zone.

Wednesday, April 02, 2008

Uncommon Descent and Common Decency

I have been deep in discussion with the ID theorists on Uncommon Descent. See here, here, here, and here. I have endeavored to keep things as cordial and decent as possible, although a commentator who calls himself KairosFocus on the last link proved to be a bit touchy at first. This is understandable: he has done a lot of work on thermodynamic issues and has some worthy points to grapple with, and it is a subject on which I breezed in confidently and presumed to pronounce. Moreover, like other ID theorists, his name has been kicked around.

What I do find difficult to handle is the lack of detachment one finds on both sides of the creation/evolution debate, with emotions running high and insults filling the web pages. The reasons for this I think are clear: The atheists have strongly identified themselves with evolution because uncomfortable and tricky questions about the primary ontology of the cosmos (which so easily lead on to God) can then be at least deferred if not cleared off the table altogether. The ID theists on the other hand, with their concept of the direct intervention of what they evasively call ‘intelligence’, are managing to be both non-committal and in-yer-face at the same time (and of course everyone knows what they are really talking about). They have effectively dragged God off the philosophical back shelf into the foreground and spotlight of scientific polemic, thus challenging many atheists in their own preferred field: and they don’t like it! Sometimes the ID theorists look like chickens running around with targets on their backs! You’ve got to hand it to them; they are courageous!

I don’t particularly want to make enemies of reasonable, fair and intelligent proponents on either side of the debate, but I suppose that’s just too much to ask. Both sides are looking for someone to shoot at and if you have an intermediate position like mine you have to do some pretty nifty diplomatic footwork to miss the bullets from either side.

As a theist I don’t find myself losing sleep or getting uptight about whichever side of the reasoned argument is prevailing. Having no particular vested interest makes me a pretty cool customer. I personally regard that as a good thing: emotions can cloud one’s judgment and self-awareness. And yet, on the other hand, like fuel, it is emotion that keeps one flying high.

Sunday, February 17, 2008

Chocks Away!

Right, after several months on atheist Larry Moran’s blog getting up to speed on evolution I’ve at last decided to invade the “opposition’s” airspace! See this link, and especially this link, for results.

I’m not really an antagonist; some argue for either evolution or ID as if their life depends on it. In my case I’m not hunting down enemies; I’m hunting down answers – if there are any to be found! But nevertheless, the ironies are profound.

Wednesday, February 06, 2008

The Black Swan

I’ve recently finished reading this book by Nassim Nicholas Taleb. It left a favourable impression on me and so I’m glad to see that it has become a best seller. The book exposes the conceits and false confidences in our ability to understand and/or predict that may accompany what are effectively toy-town theoretical renderings of complex realities. One commentator suggested that Taleb’s book is repetitive, but given how easily human beings fall into the trap of what Taleb calls ‘epistemic arrogance’, the lesson needs to be repeated over and again.

There is far more right with this book than there is wrong with it, but if I were to engage it critically I would want to delve a bit deeper into the following matters:

The Gaussian Bell curve: This curve is a product of the random sequence, the latter being a limiting form of complexity and therefore of fundamental interest. However, I think that Taleb is nevertheless right here: perfect randomness (and therefore the Gaussian bell curve) has a limited application in our world. Our world exists somewhere in that vast trackless domain between high order and maximum disorder.

Fractal mathematics and the scalable power-law treatment of ‘negative black swans’, like disasters: Fractal scalability only seems to exist over relatively small logarithmic ranges. Taleb is of course aware of this, but suggests that the limits of scalability are academic if we live inside the scalable zone.

Retrospective explanatory narratives: Taleb pours scorn on those who rationalize things after the event. My own feeling is that the ‘NP’ like structure of some domains of study allows outcomes, like riddles, to be correctly rationalized only after the event. But on the whole I suspect Taleb is right again: retrospective analysis, like the Gaussian bell curve, sometimes works; often, however, it may simply amount to constellation spotting.

Theoretical Narrative: There is nothing wrong in theorizing per se: it’s just that this activity must proceed with caution and humility, always self-aware on an acutely self-critical meta-science level: how and why do we know we know?

The Cosmos is not sufficiently ordered for it to be universally amenable to elementary mathematical models, and yet it is not disordered enough for comprehensive statistical treatment. Mathematical probability might seem to be a way of dealing with uncertainty, but it only works if the possible cases can be enumerated. Unlike games of chance, our world is open ended, with a space of possible cases of staggering dimension and unknown limit. Therefore formal probability cannot be used to quantify the unknown or provide a handle on those surprising ‘one-off’ events that hit us again and again. To attempt to do so is what Taleb calls the ‘ludic fallacy’.
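
To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from Taleb’s book) using only Python’s standard library. It compares how much of a total is carried by the single largest observation under a Gaussian and under a scalable Pareto distribution; the parameters (mean 100, standard deviation 15, tail index 1.1) are arbitrary illustrative choices.

```python
import random

def largest_share(samples):
    """Fraction of the total contributed by the single largest sample."""
    return max(samples) / sum(samples)

random.seed(0)
N = 100_000

# 'Mediocristan': positive Gaussian-like quantities (e.g. heights).
gaussian = [abs(random.gauss(100, 15)) for _ in range(N)]

# 'Extremistan': a Pareto (power-law) variable, scalable in Taleb's sense.
alpha = 1.1  # illustrative tail index
pareto = [random.paretovariate(alpha) for _ in range(N)]

print("Gaussian: largest single sample carries "
      f"{largest_share(gaussian):.6%} of the total")
print("Pareto:   largest single sample carries "
      f"{largest_share(pareto):.2%} of the total")
```

In the Gaussian case no single observation matters; in the scalable case one observation can dominate the whole sum, which is the sense in which the bell curve has only limited application.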

Taleb is brash and abrasive about ‘empty suit’ forecasters and the ‘nerd knowledge’ of academia. This will undoubtedly upset those people who know they are being targeted. It was easy for me to laugh heartily at Taleb's amusing and quite extreme jibes and anecdotes, because I'm not the target. But those whose ivory tower positions have given them such a breathtaking complacency that they think it acceptable to bully and insult us into their beliefs have brought Taleb the Tornado on themselves. On balance I would rather we had people around like Taleb to challenge an endemic epistemic attitude problem than live in comfortable denial. If we take Taleb’s lesson to heart we will be better emotionally adjusted for the next big shock.

Monday, January 28, 2008

The Intelligent Design Contention: Part 4

Casey Luskin
In this web article Casey Luskin, Intelligent Design apologist, responds with a comprehensive rebuttal of America’s National Academy of Sciences’ negative assessment of ID. The article exploits the self-criticizing quotes of evolutionary theorists who candidly admit weaknesses and gaps in the theory in relation to a variety of outstanding problems: the inconsistencies in trying to build phylogenetic trees, the poor state of abiogenesis, problems in relating evolution to the fossil record, and the patchy evidence of hominid evolution. For me these problems are not unexpected given the nature of the beast – evolution is one of those big historical theories dealing with a complex ontological category. My own gut feeling is that evolutionary theory, given what it is trying to achieve, does a good job of achieving it – that of linking a diversity of observations into one theoretical framework. It will, of course, never be as sure-footed as, say, celestial mechanics or atomic theory, simply because it deals with such epistemologically intractable objects as deep time and, it hardly needs be said, the most complex objects we know of – living things. It is unfair to compare it, as Luskin does in his article, with the much simpler objects that physics studies. Evolution is certainly not less well founded than some of the other epistemologically tricky things I believe in, like, for example, historical narratives, middle-of-the-road socialism, the constitutional monarchy, or theism. I take evolution and evolutionists seriously. It’s one thing to criticize evolution, but it’s quite another to attempt to advance an alternative theory that is as successful.

But I also take Luskin and his fellow ID proponents seriously, and especially that key concept of ID, irreducible complexity. In this series of posts I will be using my blog as a kind of sounding board to help develop my abstract ideas in relation to this key ID notion. In ID theory, if theory it is, the whole edifice of intelligent design stands or falls by the notion of irreducible complexity. Hence it is this concept that I want to focus my efforts on here.

In his article Luskin addresses the NAS treatment of irreducible complexity. He answers the sort of criticism of irreducible complexity that we find in the video of Ken Miller which I posted in my last entry on this subject:

Dembski …. recognizes that Darwinists wrongly characterizes irreducible complexity as focusing on the non-functionality of sub-parts. Conversely, pro-ID biochemist Michael Behe, who popularized the term “irreducible complexity,” properly tests it by assessing the plausibility of the entire functional system to assemble in a step-wise fashion, even if sub-parts can have functions outside of the final system. The “leap” required by going from one functional sub-part to the entire functional system is indicative of the degree of irreducible complexity in a system. Contrary to the NAS’s assertions, Behe never argued that irreducible complexity mandates that sub-parts can have no function outside of the final system.

Luskin also quotes Michael Behe, the “Galileo” of ID, who explains why the existence of the Type Three Secretory System as a precursor of the E. coli flagellum proves little:

At best the Type Three Secretory System represents one possible step in the indirect Darwinian evolution of the bacterial flagellum. But that still wouldn’t constitute a solution to the evolution of the bacterial flagellum. What’s needed is a complete evolutionary path and not merely a possible oasis along the way. To claim otherwise is like saying we can travel by foot from Los Angeles to Tokyo because we’ve discovered the Hawaiian Islands. Evolutionary biology needs to do better than that.


Our ignorance of the structure of morphospace is cutting both ways here: Behe is claiming that absence of evidence of possible evolutionary routes in morphospace is evidence of absence. On the other hand, evolutionists claim that absence of evidence of these evolutionary paths is not evidence of absence. Of course, neither party has provided killer evidence either way. So who has the edge here? The claims of ID theorists and evolutionists are, logically speaking, in complementary opposition rather than symmetrical opposition. It is surely an irony that of the two sides ID, in an elementary Popperian sense, is ostensibly making the more easily refutable claims: ID is stating a quasi-universal, namely that evolutionary paths to working structures like the flagellum of E. coli don’t exist – all we need to do is find one route and the proposition is ‘falsified’. Evolutionists, on the other hand, are making an existential statement; they are claiming that the routes do exist: such statements can’t be falsified, but they can be ‘verified’ by just one case – if they find that one case their claim is ‘proved’. But both sides have their work cut out; the sheer size and complexity of morphospace makes it a little more difficult to investigate than the Pacific! Moreover, the ontological complexities of what is basically a historical subject will no doubt scupper any claims of either absolute falsification or verification, and at best only evidence tipping the balance in one direction or the other is likely to be found.

However, having said that, the existence of “islands” such as the TTSS does start to weaken the ID case: the ‘irreducible complexity’ of the flagellum structure is thus less irreducible than we were led to believe; the TTSS sets a precedent that starts to erode blanket claims that evolutionary paths don’t exist. If Behe is so demanding as to require evolutionists to map out a full path, then it is only fair that evolutionists are equally demanding and require ID theorists to show that such paths don’t exist. So given that neither party can easily provide a suite of evidence that comprehensively proves their case, we have to plump for partial evidence weighing the case in either direction. Thus, in this weaker sense the TTSS is evidence in favor of evolution, whatever Behe says.

Since the ID theorists have come up with no absolute proof that evolutionary routes to their ‘irreducibly complex’ structures don’t exist, they are forced to sit with a passive uneasiness, hoping that such paths don’t pop out of the scientific woodwork. There is a marked difference in the strategic positions of the opposing sides: ID theorists are in passive defense, hoping that evolutionary paths through morphospace will not be found. Evolutionists, on the other hand, have the initiative – they can imaginatively and proactively search for solutions – and who knows, they may yet come up with the goods. Frankly, from a strategic point of view I would rather be an evolutionist.


One final question that Luskin addresses is this: Is ID science? Luskin says, Yes, of course it is! True, the bare idea of irreducible complexity is an intelligible notion that can be investigated empirically and logically, although as it deals with the structure of that complex beast we call morphospace, scientists have their work cut out. However, it is when ID theorists jump from irreducible complexity to the operation of intelligence that the waters start to muddy. When we discover an archeological object that looks like an artifact, the working assumption is that human intelligence has been at work. We can test this assumption in relation to the known traits of human beings. But ID does not identify the source of the artifaction, and in spite of using the scientific-sounding term ‘Intelligent Design’, that intelligence is a wild card: it is a naked intelligence of unknown powers and motives. This throws into sharp relief such questions as: What is the nature of intelligence? How does intelligence achieve what it does? Why is intelligence needed to create certain structures? It is these sorts of questions I hope to probe and make some progress with in this series of posts.

Monday, January 14, 2008

The Intelligent Design Contention: Part 3

Ken Miller
Here is a video of a lecture by the biologist Ken Miller (see picture) speaking out against Intelligent Design. Miller is a Catholic and an evolutionist. I don’t fully agree with all that he says about the nature of science: I don’t think a clear-cut distinction can be made between the ‘natural’ and the ‘supernatural’, with science only dealing in the former; the ontological and epistemological categories in our world are too blended for us to sharply partition the world in this way. However, the philosophy of science isn’t the issue I wish to pursue here. In this connection I am more interested in Miller’s comments on the ‘science’ of ID. Miller uses his expertise in biology to good effect when he successfully challenges the ID notion of irreducible complexity, especially in connection with the E. coli flagellum, the immune system and the blood-clotting cascade. He shows the ID view that parts of these systems are of no use by themselves to be false. He also makes some very notable observations on the tactics being used by the ID community.

Miller makes a particularly compelling point (apparently one of the points that helped carry the Dover trial ruling against ID) when he suggests that the introduction of ID into classrooms under the rubric ‘Teach the Controversy’ creates a false dichotomy between science and religion; that is, it frames the debate to look as though it is ‘naturalistic’ evolution vs. ‘supernatural’ creation by God, with, of course, ID coming in on the side of God against those who, like myself, favour evolution. Ironically, many atheists would agree with this framing. This is typical, typical of so much evangelicalism – lines are drawn in order to define the ‘view of the righteous’ and woe betide you if you find yourself on the wrong side of the line. In this way spiritual duress is applied and this has the effect of pressuring believers to fall into line.

Not that Miller’s own denomination doesn’t have a history of pressuring believers to fall into line. Although I am very uncomfortable with centrally controlled religion, in this case Miller’s denomination is reaping a benefit: If the leadership just happen to take a reasonable view on an issue this can make itself felt all the way down the line (but, of course, the reverse happens as well!). In contrast evangelicalism is highly fragmented, with local fanatics often securing and commanding ignorant and gullible followings.

Friday, January 04, 2008

The Intelligent Design Contention: Part 2

Intelligent Design Examples
The de-facto symbol, rallying emblem, and prototype of the Intelligent Design school is the propulsion flagellum of the bacterium Escherichia coli. A complex motor-like construction delivers the revolutions to the flagellum. The ID case may derive some of its compelling quality from diagrammatic representations of the flagellum drive. These diagrams (see picture) show a structure with rotary components that has at least a superficial resemblance to a piece of human engineering. This resemblance may help elicit the intuitive gut reaction “An inventor must have designed and made it!”. Just looking at representations of the flagellum drive sets the mind up to be more susceptible to the ID contention that this structure of harmonized components must have come together in one grand-slam creative act overseen by an Intelligent Designer. Clearly this designer knew all about wheel bearings long before humans were rolling stone monuments of Egyptian kings on logs. Inventors, particularly male inventors, have never been able to get the invention of the wheel out of their heads, and when they see a wheel they know there must be a like mind around somewhere.
Other examples of biological engineering used to promote ID are the blood-clotting cascade and the immune system (the arguments ‘for and against’ here can be locked onto using Google). These molecular-level systems do not just depend on the production of a single protein but consist of a production line of inter-dependent proteins, rather like a molecular ‘industrial process’, that achieves the required result. Remove one protein from this production line and the functionality of the system is severely compromised. Thus, if vital biological systems like blood clotting and the immune system fail to function for want of a single component, then the organism hosting these substructures becomes unstable and dies. In the abstract the ID argument is this: how could all these parts have come together without intelligence? For clearly, ID theorists argue, they must have come together as a whole, because removal of any one part leads to failure of function and death. These systems, they claim, cannot be made any simpler; that is, they cannot be reduced – they are ‘irreducibly complex’. The two ‘big name’ Christian theorists associated with the defense of the concept of irreducible complexity are Michael Behe and William Dembski. In some Christian circles they are megastars, Davids fighting courageously against the Goliaths of evolutionary theory.

I recently heard about another ID theorist whilst reading Sandwalk, the blog of atheist Larry Moran. Sandwalk reported (critically, of course) on the work of Canadian ID theorist Kirk Durston, who is researching proteins, the active chemical ‘bricks’ of living things. To carry out their function the long molecular strands comprising protein molecules must be folded into appropriate shapes. According to Durston there comes a point when incremental changes in the molecular sequence of proteins completely disrupt their folding, thereby making them non-functioning. Durston is claiming that if changes in the molecular structure of proteins go beyond a certain magnitude threshold they cannot do their job. Once again the ID case rests on the difficulty of conceiving how certain biological structures could have come about except as complete, up-and-running systems. The concept of irreducible complexity does not recognize half-measure structures – structures either work or they don’t. Biological solutions, it is claimed, have no sense of nearness or vicinity attached to them – they have to be either bang on target or they are a complete miss.

The ID vs. Darwin debate usually centers round specific organic examples like the prototypes I have given above. There seems to be a reason for this example-by-example treatment. Morphospace is a colossal and hugely complex platonic construction, highly inhomogeneous in its possibilities, and embracing objects of different types and levels – from atomic configurations that make up proteins, through the molecular reactions of protein production lines, to the micro engineering of E. coli. One could, of course, introduce even higher types – e.g. fully blown organisms or an ant’s nest (which is effectively a structure made of many individual organisms). Another tricky facet of morphospace is that environment has a critical bearing on the stability of structures; a structure that is stable in one environment might be unstable in another. Also, a structure may affect or become part of an environment, thus giving rise to the non-linear effects of feedback. Moreover, strictly speaking morphospace doesn’t just include biological structures, but just about everything that we can conceive of, and more, that can be made from atomic matter; this even includes human artifacts like a bar of soap, a house, Lego, a jet fighter, a computer, or a Von Neumann machine. The class of objects covered by morphospace is so varied in typing and level, with so many unknown degrees of freedom, and so open ended in functional possibilities, that general analytical treatment of this ‘space’ seems to be beyond the wit of man; hence the need to dwell on the specific rather than the general.
But in spite of all this, ID theorists are committed to the notion that in critical biological regions morphospace possesses a feature that is a barrier to evolution, or at least in the examples they constantly hark back to: they see these example biological structures standing isolated in a kind of design vacuum; that is, they are not surrounded by lower ‘marks’ or similar structures that could be part of an equally stable nexus. Thus, according to ID theory, they have no stable neighboring structures that evolutionary gradualism could have passed through en route to the ‘final design’. ID theory therefore swings on the assumed disconnectedness of the regions of structural stability in morphospace. This is the rather brave and quite possibly wrong assumption on which ID theory rests, but it seems that the conceptual intractability of morphospace, so vast in its ramifying possibilities and typing, makes it difficult for evolutionists to refute this assumption with one-liners. The result is much frustration, annoyance and abrasive dialogue.
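
To make the disputed assumption concrete, here is a toy sketch of how one might probe it: ‘structures’ are short bit strings, ‘stability’ is an arbitrary made-up predicate (not real biology), and a breadth-first search asks whether the stable structures form one connected region under single-bit changes or sit as isolated islands. It is a sketch of the question, not a verdict on it.

```python
from itertools import product
from collections import deque

L = 10  # toy 'morphospace' of all 10-bit structures (2**10 points)

def viable(s):
    """Toy stand-in for 'stable/functional': strings with at least six 1-bits
    count as viable. Purely illustrative, not biology."""
    return s.count("1") >= 6

def neighbours(s):
    """All structures reachable by a single incremental (one-bit) change."""
    for i in range(len(s)):
        yield s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]

viable_set = {"".join(bits) for bits in product("01", repeat=L)
              if viable("".join(bits))}

# Breadth-first search: is every viable structure reachable from one seed
# by stepping only through other viable structures?
seed = next(iter(viable_set))
seen = {seed}
queue = deque([seed])
while queue:
    current = queue.popleft()
    for nb in neighbours(current):
        if nb in viable_set and nb not in seen:
            seen.add(nb)
            queue.append(nb)

print(f"{len(viable_set)} viable structures, "
      f"{len(seen)} reachable from the seed by viable one-bit steps")
print("Connected region" if seen == viable_set else "Isolated islands exist")
```

For a real protein or flagellum the space is astronomically larger and the viability predicate unknown, which is exactly why neither side can settle the connectivity question with a one-liner.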

Monday, December 31, 2007

Fireside Problem

Over the Christmas period I have been grappling with a problem that has been hanging around with me for some time.
Now, it is possible to generate highly disordered sequences of 1s and 0s using an algorithm. An algorithm is also expressible as a sequence of 1s and 0s. However, whereas disordered sequences may be long – say of length Lr – the length of the algorithm sequence generating one of them, Lp, may be short. That is, Lp is less than Lr. The number of possible algorithm sequences will be 2^Lp. The number of possible disordered sequences will be nearly equal to 2^Lr. So, because 2^Lp is much less than 2^Lr, it is clear that short algorithms can only generate a very limited number of all the possible disordered sequences. For me, this result has a counter-intuitive consequence: It suggests that there is a small class of disordered sequences that have a special status – namely the class of disordered sequences that can be generated by short algorithms. But why should a particular class of disordered sequences be so mathematically favoured when in one sense every disordered sequence is like every other disordered sequence, in that they all have the same statistical profile? My intuitions suggest that all disordered sequences are in some sense mathematically equal, and yet it seems that algorithms confer a special status on a small class of these sequences.
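
A toy illustration of the counting argument (the expansion function below is just a stand-in for ‘running a short algorithm’, seeded by the program string; it is not any particular computing model):

```python
import random

Lp = 8    # 'program' length in bits
Lr = 32   # 'result' length in bits

def run_program(program_bits):
    """Stand-in for executing an algorithm: deterministically expand a short
    program string into a long, disordered-looking result string."""
    rng = random.Random(program_bits)          # the program acts as the seed
    return "".join(rng.choice("01") for _ in range(Lr))

outputs = {run_program(format(p, f"0{Lp}b")) for p in range(2 ** Lp)}

print(f"programs of length {Lp}: {2 ** Lp}")
print(f"possible results of length {Lr}: {2 ** Lr}")
print(f"distinct results actually generated: {len(outputs)}")
print(f"fraction of result space reachable: {len(outputs) / 2 ** Lr:.3e}")
```

However the expansion is defined, at most 2^Lp of the 2^Lr possible result strings can ever appear, which is the counting point made above.
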
I think I now know where the answer to this intuitive contradiction lies. To answer it we have to go back to the network view of algorithmic change. If we take a computer like a Turing machine, then it seems that it wires the class of all binary sequences into a network in a particular way, and it is the bias introduced by the network wiring that leads to certain disordered configurations being apparently favored. It is possible to wire the network together in other ways that would favour another class of disordered sequences. In short, the determining variable isn’t the algorithm but the network wiring, which is a function of the computing model being used. It is the computing model inherent in the network wiring that has as many degrees of freedom as a sequence as long as Lr. Thus, inasmuch as any disordered sequence can occupy an algorithmically favoured position depending on the network wiring used by the computing model, in that sense no disordered sequence is absolutely favoured over any other.

Well, that’ll have to do for now; I suppose I had better get back to the Intelligent Design Contention.

Wednesday, December 12, 2007

The Intelligent Design Contention: Part 1

The Contention
For some months now I have been reading Sandwalk, the blog of atheist Larry Moran, or as he dubs himself ‘A Skeptical Biochemist’: (He can’t be that skeptical otherwise he would apply his skepticism to atheism and graduate from ‘skeptic’ to ‘doubter’.) One reason for going to his blog was to help me get a handle on the evolution versus Intelligent Design (ID) debate. The ID case hinges on a central issue that I describe in this post.

The ID contention is, in the abstract, this: If we take any complex organized system consisting of a set of mutually harmonized parts that by virtue of that harmony forms a stable system, it seems (and I stress seems) that any small change in the system severely compromises the stability of that system. If these small changes lead to a breakdown in stability, how could the system have evolved, given that evolution is a process requiring that such systems be arrived at by a series of incremental changes?

Complex organized systems of mutually harmonized components are termed ‘Irreducibly Complex’ by ID theorists. The term ‘irreducible’ in this context refers, I assume, to the fact that apparently any attempt to make the system incrementally simpler by, say, removing or changing a component results in severe malfunction, and in turn this jeopardizes the stability of the system. If the apparent import of this is followed through, it follows that there are no ‘stable exit routes’ by which the system can be simplified without compromising stability. And if there are no ‘stable exit routes’ then there are no ‘stable entry routes’ by which an evolutionary process can ‘enter’.

Mathematically expressed:

Stable incremental routes out = stable incremental routes in = ZERO.
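
A toy operational version of this condition, under my own illustrative reading: given a set of components and a made-up viability predicate, count the single-component removals that leave the system viable. An empty count is the claimed ‘irreducibly complex’ case, and because single-step changes are reversible, zero exits implies zero single-step entries.

```python
def stable_exit_routes(components, viable):
    """Components whose individual removal leaves the system viable.
    An empty result is the 'zero stable routes' situation claimed by ID."""
    return [c for c in sorted(components) if viable(components - {c})]

# Toy viability predicate (made up for illustration): the system works only
# if 'rotor', 'stator' and 'filament' are all present.
def viable(parts):
    return {"rotor", "stator", "filament"} <= parts

with_spare = {"rotor", "stator", "filament", "hook"}
bare_core = {"rotor", "stator", "filament"}

print("with a spare part:", stable_exit_routes(with_spare, viable) or "none")
print("bare core:        ", stable_exit_routes(bare_core, viable) or "none")
```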

In the ID view many biological structures stand in unreachable isolation, surrounded by a barrier of evolutionary 'non-computability'. Believing they have got to this point, ID theorists are then inclined to make a threefold leap in their reasoning: 1) They reason that at least some aspects of complex stable systems of mutually harmonized parts have to be contrived all of a piece; that is, in one fell swoop as a fait accompli. 2) That the only agency capable of creating these designs in such a manner is one with intelligence as a ‘given’. 3) That this intelligence is to be identified with God.

I get a bad feeling about all this. Once again I suspect that evangelicalism is urinating up the wrong lamp post. Although the spiritual attitudes of the ID theorists look a lot better than those of some of the redneck Young Earth Creationists, I still feel very uneasy. So much that one is supposed to accept under the umbrella of evangelicalism is often administered with subtle and sometimes not so subtle hints that one is engaged in spiritual compromise if one doesn’t accept what is being offered. I hope that at least ID theory will not become bound up with those who apply spiritual duress to doubters.

Wednesday, November 14, 2007

Generating Randomness

I have been struggling to relate algorithmics to the notion of disorder. In my last post on this subject I suggested that an algorithm that cuts a highly ordered path through a network of incremental configuration changes in order to eventually generate randomness would require a ‘very large’ number of steps in order to get through the network. This result still stands, and it probably applies to a simple algorithm like the counting algorithm, which can be modeled using simple recursive searching of a binary tree network.

But, and here’s the big but, a simple counting algorithm does not respond to the information in the current results string (the results string is the equivalent of the tape in a Turing machine). When an algorithm starts receiving feedback from its ‘tape’, then things change considerably – we then move into the realm of non-linearity.

Given the ‘feedback’ effect, and using some assumptions about just what a Turing machine could at best achieve, I have obtained a crude formula giving an upper limit on the speed with which a Turing machine could develop a random sequence. A random sequence would require at least ‘t’ steps to develop, where:

t ~ kDn/log(m)
and where ‘k’ is a constant whose factors I am still pondering, ‘D’ is the length of the sequence, ‘n’ is an arbitrarily chosen segment length whose value depends on the quality of randomness required, and ‘m’ is the number of Turing machine states. This formula is very rough, but it makes use of a frequency profile notion of randomness I have developed elsewhere. The thing to note is that the non-linearity of algorithmics (a consequence of using conditionals) means that the high complexity of randomness could conceivably be developed in linear time. This result was a bit of a surprise, as I have never been able to disprove the notion that the high complexity of randomness requires exponential time to develop – a notion not supported by the speed with which ‘random’ sequences can be generated in computers. Paradoxically, the high complexity of randomness (measured in terms of its frequency profile) may not be computationally complex. On the other hand, it may be that what truly makes randomness computationally complex is that for a high quality of randomness, D and n are required to become ‘exponentially’ large in order to obtain a random sequence with a ‘good’ frequency profile.
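
For what it is worth, here is a rough sketch of what a frequency-profile measure might look like – my own reading of the idea, not the formulation in the private paper: slide a window of n bits along a sequence of length D and compare segment counts with the uniform expectation.

```python
import random
from collections import Counter

def frequency_profile(seq, n):
    """Count how often each n-bit segment occurs in a binary string."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def profile_deviation(seq, n):
    """Mean absolute deviation of segment counts from the uniform expectation.
    Small values indicate the 'flat' profile expected of randomness."""
    counts = frequency_profile(seq, n)
    expected = (len(seq) - n + 1) / 2 ** n
    segments = [format(i, f"0{n}b") for i in range(2 ** n)]
    return sum(abs(counts.get(s, 0) - expected) for s in segments) / 2 ** n

random.seed(1)
D, n = 100_000, 4
ordered = "01" * (D // 2)                                    # highly ordered
disordered = "".join(random.choice("01") for _ in range(D))  # coin tosses

print("ordered sequence deviation:   ", round(profile_deviation(ordered, n), 1))
print("disordered sequence deviation:", round(profile_deviation(disordered, n), 1))
```

The ordered string concentrates all its weight on two segments and so deviates wildly from the flat profile; the coin-toss string sits close to it, which is the sense of ‘good’ frequency profile used above.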

Thursday, September 27, 2007

The Virtual World

Why don’t I accept the multiverse theory, I have to ask myself. One reason is that I interpret quantum envelopes to be what they appear to be: namely, objects very closely analogous to the Gaussian envelopes of random walk. Gaussian random-walk envelopes are not naturally interpreted as a product of an ensemble of bifurcating and multiplying particles (although this is, admittedly, a possible interpretation) but rather as a measure of information about a single particle. ‘Collapses’ of the Gaussian envelope are brought about by changes in information on the whereabouts of the particle. I see parallels between this notion of ‘collapse’ and quantum wave collapse. However, I don’t accept the Copenhagen view that sudden jumps in the “state vector” are conditioned by the presence of a conscious observer. My guess is that the presence of matter, whether in the form of a human observer or other material configurations (such as a measuring device), is capable of bringing about these discontinuous jumps. (Note to self: The probability of a switch from state |w) to state |v) is given by |(w|v)|², and this expression looks suspiciously like an ‘intersection’ probability.)

The foregoing also explains why I’m loath to accept the decoherence theory of measurement: this is another theory which dispenses with literal collapses, because it suggests that they are only an apparent phenomenon inasmuch as they are an artifact of the complex wave interactions of macroscopic matter. Once again this seems to me to ignore the big hint provided by the parallels with random walk. Those parallels lead me to view wave function ‘collapse’ as something closely analogous to the changes of information which take place when one locates an object; the envelopes of random walk can change discontinuously in a way that is not subject to the physical strictures on the speed of light, and likewise for quantum envelopes. My guess is that the ontology of the universe is not one of literal particles, but is rather an informational facade about virtual particles; those virtual particles can’t exceed the speed of light, but changes in the informational envelopes in which the virtual particles are embedded are not subject to limitations on the speed of light.
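
The classical side of the analogy can be made concrete with a small sketch (this is the random-walk picture only, not quantum mechanics): a probability envelope over a walker's position spreads with time, and a report locating the walker 'collapses' the envelope by simple conditioning – a change of information, with nothing travelling anywhere. The interval chosen for the report is arbitrary.

```python
import math

def gaussian_envelope(positions, sigma):
    """Un-normalised Gaussian probability envelope centred on the origin."""
    return [math.exp(-x * x / (2 * sigma * sigma)) for x in positions]

def normalise(p):
    total = sum(p)
    return [v / total for v in p]

positions = list(range(-50, 51))

# After t steps of an unbiased random walk the envelope has sigma ~ sqrt(t).
t = 400
prior = normalise(gaussian_envelope(positions, math.sqrt(t)))

# New information arrives: the walker is reported to lie in [10, 20].
# The 'collapse' is just conditioning on that report; nothing is signalled.
posterior = normalise([p if 10 <= x <= 20 else 0.0
                       for x, p in zip(positions, prior)])

def spread(dist):
    mean = sum(x * p for x, p in zip(positions, dist))
    return math.sqrt(sum((x - mean) ** 2 * p for x, p in zip(positions, dist)))

print(f"spread before the report: {spread(prior):.1f}")
print(f"spread after the report:  {spread(posterior):.1f}")
```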

Monday, September 24, 2007

Well Versed

I have been delving into David Deutsch’s work on parallel universes. Go here to get some insight into Deutsch’s recent papers on the subject. I’m not familiar enough with the formalism of quantum computation to be able to follow Deutsch’s papers without a lot more study. However, some salient points arise. In this paper dated April 2001 and entitled “The Structure of the Multiverse” Deutsch says:
“.. the Hilbert space structure of quantum states provides an infinity of ways of slicing up the multiverse into ‘universes’ each way corresponding to a choice of basis. This is reminiscent of the infinity of ways in which one can slice (‘foliate’) a spacetime into spacelike hypersurfaces in the general theory of relativity. Given such a foliation, the theory partitions physical quantities into those ‘within’ each of the hypersurfaces and those that relate hypersurfaces to each other. In this paper I shall sketch a somewhat analogous theory for a model of the multiverse”
That is, as far as I understand it, Deutsch is following the procedure I mentioned in my last blog entry – he envisages the relationships of the multiple universes as similar to the way in which we envisage the relationships of past, present and future.

On the subject of non-locality, Deutsch, in this paper on quantum entanglement, states:
“All information in quantum systems is, notwithstanding Bell’s theorem, localized. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through ‘classical’ (i.e. decoherent) channels.”
Deutsch is attacking the non-local interpretation of certain quantum experiments. In this paper David Wallace defends Deutsch and indicates the controversy surrounding Deutsch’s position and its dependence on the multiverse contention. In the abstract we read:

“It is argued that Deutsch’s proof must be understood in the explicit context of the Everett interpretation, and that in this context it essentially succeeds. Some comments are made about the criticism of Deutsch’s proof by Barnum, Caves, Finkelstein, Fuchs and Schack; it is argued that the flaw they point out in the proof does not apply if the Everett interpretation is assumed.”

And Wallace goes on to say:

“…it is rather surprising how little attention his (Deutsch’s) work has received in the foundational community, though one reason may be that it is very unclear from his paper that the Everett interpretation is assumed from the start. If it is tacitly assumed that his work refers instead to some orthodox collapse theory, then it is easy to see that the proof is suspect… Their attack on Deutsch’s paper seems to have been influential in the community; however, it is at best questionable whether or not it is valid when Everettian assumptions are made explicit.”

The Everett interpretation equates to the multiverse view of quantum mechanics. Deutsch’s interpretation of QM is contentious. It seems that theorists are between a rock and a hard place: on the one hand is non-locality and absolute randomness and on the other is an extravagant ontology of a universe bifurcating everywhere and at all times. It is perhaps NOT surprising that Deutsch’s paper received little attention. Theoretical Physics is starting to give theorists that “stick in the gullet” feel and that’s even without mentioning String Theory!

Saturday, September 22, 2007

Quantum Physics: End of Story?

News has just reached me via that auspicious source of scientific information, Norwich’s Eastern Daily Press (20 September), of a mathematical breakthrough in quantum physics at Oxford University. The work is described as “one of the most important developments in the history of science”; my assessment of the report is that multiverse theory has been used to derive and/or explain quantum physics.

There are two things that have bugged scientists about Quantum Physics since it was developed in the first half of the twentieth century. Firstly, its indeterminism – it seemed to introduce an absolute randomness into physics that upset the classical mentality of many physicists, including Einstein: “God doesn’t play dice with the universe”. The second problem, which is in fact related to this indeterminism, is that Quantum Theory suggests that when these apparently probabilistic events do occur, distant parts of the universe hosting the envelope of probability for these events must instantaneously cooperate by giving up their envelope. This apparent instantaneous communication between distant parts of the cosmos, demanding faster-than-light signaling, also worried Einstein and other physicists.

Multiverse theory holds out the promise of reestablishing a classical physical regime of local and deterministic physics, although at the cost of positing the rather exotic idea of universes parallel to our own. It achieves this reinstatement, I guess, by a device we are, in fact, all familiar with. If we select, isolate and examine a particular instant in time in our own world we effectively cut it off from its past (and future). Cut adrift from the past, much about that instant fails to make sense and throws up two conundrums analogous to the quantum enigmas I mentioned above. Firstly, there will be random patterns, like the distribution of stars, which just seem to be there, when in fact an historical understanding of star movement under gravity gives some insight into that distribution. Secondly, widely separated items will seem inexplicably related – like, for example, two books that have identical content. By adding the time dimension to our arbitrary time slice, the otherwise inexplicable starts to make sense. My guess is that by adding the extra dimensions of the multiverse a similar explanatory contextualisation has finally – and presumably tentatively – been achieved with the latest multiverse theory.

Not surprisingly, the latest discovery looks as though it has come out of the David Deutsch stable. He has always been a great advocate of the multiverse. By eliminating absolute randomness and non-locality, multiverse theory has the potential to close the system and tie up all the loose ends. Needless to say, all this is likely to proceed against a background of ulterior motivations and may well be hotly contested, not least the contention that Deutsch has made the greatest discovery of all time!
Postscript:
1. The tying of all loose ends is only apparent; all finite human knowledge can only flower out of an irreducible kernel of fact.
2. Multiverse theory, unlike the Copenhagen interpretation of Quantum Mechanics, suggests that quantum envelopes do not collapse at all, but always remain available for interference. Hence it should in principle be possible to detect the difference between these two versions of Quantum Theory experimentally.

Thursday, August 30, 2007

Pilgrim's Project Progress on Path Strings

Here is the latest on my attempts to apply the notion of disorder to algorithms.

In 1987 I compiled a private paper on the subject of disorder. I defined randomness using its pattern frequency profile and related it to its overwhelming statistical weight, showing that the frequency profile of randomness has, not surprisingly, far more particular ways in which it can be contrived than any other kind of frequency profile. I never thought of this work as particularly groundbreaking, but it really represents a personal endeavour to understand an important aspect of the world, and ever since, this notion of randomness has been an anchor point in my thinking. For example, for a long while I have worked on a project that has reversed the modern treatment of randomness – rather than throw light on randomness using algorithmic information theory, I have attempted to investigate algorithmics using this frequency profile notion of randomness.


I have recently made some progress on this project, a project that constitutes project number 2 on my project list. I’m still very much at the hand waving stage, but this is how things look at the moment:

Imagine a very long binary string – long enough to express any kind of computer output or result required. Now imagine all the possible binary strings that this string could take up, and arrange them into a massive network of nodes where each node is connected to neighboring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’), then a given binary configuration will be connected to Lr other nodes, where each of these other nodes is different by just one bit.

Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, which has a length represented by Lp.

Now, if we started with a result string of Lr zeros we could convert it into a random sequence by traversing the network with a path string of minimum length in the order of Lp, where clearly Lp ~ Lr. This is the shortest possible path length for creating a random sequence. However, to achieve this result so quickly would require the path string itself to be random. But what if for some reason we could only traverse very ordered paths? How long would an ordered path string be for it to produce a complex disordered result? This is where it gets interesting because it looks as though the more ordered the path string is, the longer it has to be to generate randomness. In fact very ordered paths have to be very long indeed, perhaps exponentially long. i.e. Lp >>> Lr for ordered paths and disordered results. This seems to be my first useful conclusion. I am currently trying to find a firmer connection between the path strings and algorithms themselves.
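
Here is a rough sketch of the picture, with one illustrative encoding of a path string (each block of bits names a position of the result string to flip – my own choice, not the only possible one). A highly ordered path string of the same length as a disordered one produces a far less disordered result.

```python
import random
import zlib

Lr = 1024                     # length of the result string
block = Lr.bit_length() - 1   # bits needed to name one position (here 10)

def apply_path(path_string):
    """Read the path string in blocks; each block names a bit of the result
    string to flip. This encoding is an illustrative choice only."""
    result = [0] * Lr
    for i in range(0, len(path_string) - block + 1, block):
        pos = int(path_string[i:i + block], 2) % Lr
        result[pos] ^= 1
    return "".join(map(str, result))

def disorder(s):
    """Crude disorder proxy: length of the compressed string."""
    return len(zlib.compress(s.encode()))

random.seed(2)
Lp = 10 * Lr  # path strings ten times longer than the result string

ordered_path = ("0" * (block - 1) + "1") * (Lp // block)       # highly ordered
random_path = "".join(random.choice("01") for _ in range(Lp))  # disordered

print("result from ordered path, disorder :", disorder(apply_path(ordered_path)))
print("result from random path,  disorder :", disorder(apply_path(random_path)))
```

Under this encoding the ordered path keeps revisiting the same position and leaves the result string almost unchanged, whereas the disordered path of the same length scrambles it thoroughly, which is the asymmetry the conclusion above points to.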

Sunday, July 15, 2007

Mathematical Politics: Part 10

The Irrational Faith in Emergence
Marx did at least get one thing right; he understood the capacity for a laissez faire society to exploit and for the poor to fall by the wayside, unthought of and neglected. In the laissez faire society individuals attempt to solve their immediate problems of wealth creation and distribution with little regard for the overall effect. Adam Smith’s conjecture was that as people serve themselves, optimal wealth creation and distribution will come out of the mathematical wash. But is this generally true? The laissez faire system is, after all, just that: a system, a system with no compassion and without heart. As each part in that system looks to its own affairs, no one is checking to see if some people are falling into poverty. And they do – in fact one expects it from complex systems theory itself. Like all systems, the absolutely laissez faire economy is subject to chaotic fluctuations – stock market crashes, and swings in inflation and employment. These fluctuations are exacerbated by the constant perturbations of myriad factors, especially the moving goal posts of technological innovation. Moreover, the “rich get richer” effect is another well-known effect one finds in complex systems – it is a consequence of some very general mathematics predicting an inequitable power-law distribution of wealth.

In short, economies need governing; that is, they need a governmental referee to look out for fouls, exploitation and the inequalities that laissez faire so easily generates. However, this is not a call for the half-baked notions of government advocated by Marxists. Marx may have got the diagnosis at least partly right, but his medicine was poison. Slogans urging the workers to take control of the means of production may have a pre-revolutionary “the grass is greener on the other side” appeal, but slogans aren’t sufficient to build complex democratic government. Marx hoped that somehow, after the overthrow of the owning classes, the details of implementing “worker control” would sort themselves out. This hope was based on the supposition that a post-revolutionary society would consist of one class only, the working class, and since it is the clash of class interests that is at the root of conflict, a post-revolutionary society would, as a matter of course, truly serve worker interests. Ironically, the assumption that humans are capable of both rationally perceiving and serving their interests is at the heart of Marxist theory as much as it is at the heart of laissez faire capitalism.

In Marx the details of the post-revolutionary society are sparse. In my mixing with Marxists I would often hear half-baked ideas about some initial post-revolutionary government that would act as a forum served by representatives from local worker soviets. Because the post-revolutionary society is supposed to be a “one class society”, it is concluded that there will be no conflicts of interest and therefore only a one-party government will be needed to represent the interests of a single class. This government, the “dictatorship of the proletariat”, would hold all the cards of power: the media, the means of production, the police and the army. But with power concentrated in one governing party, conditions would be ripe for the emergence of two classes. The bureaucracy of the one-party command economy would, of course, become populated by a Marxist elite who would not only relate to the means of production in an entirely different way to the masses, but would also be strategically placed to abuse their power. Human nature, a subject about which Marx had so little to say, would tempt the Marxist elite to exploit the potential power abuses of a one-party context. Marxists sometimes claim that there is no need for police and army in a one-party paradise, but for the one-party governing class there is even more need for army and police in order to protect their exclusive hold on power. Moreover, the local soviets would provide the means of infiltrating and controlling the working classes via a system of informers and intimidators who would no doubt masquerade as the representatives of the working class.

It is ironic that both laissez faire capitalists and Marxists have faith in the power of a kind of “emergence” to work its magic. Both believe that once certain antecedent conditions are realized we are then on the road to a quasi-social paradise. For the laissez faire capitalist the essential precursor is a free economy. For the Marxist the overthrow of the owning classes is the required precursor that, once achieved, will allow all else to fall into place. There is a parallel here with the school of artificial intelligence that believes consciousness is just a matter of getting the formal structures of cognition right: once you do this, it is claimed, regardless of the medium on which those formal structures are reified, conscious cognition will just “emerge”. Get the precursors right and the rest will just happen, and you needn’t even think about it; the thing you are looking for will just ‘emerge’.

Rubbish.

I have had enough of this Mathematical Politics business for a bit, so I think I will leave it there for the moment.

Sunday, June 24, 2007

Mathematical Politics: Part 9

Games Theory Breaks Down
Much of the theory behind the 1980s free market revolution depended on the notion of human beings competently making ‘rational’ and ‘selfish’ decisions in favour of their own socio-economic well-being. But there is rationality and rationality, and there is selfishness and selfishness. It is clear that human beings are not just motivated by the desires guiding Adam Smith’s invisible hand. If the cognizance of politics fails on this point then those other motivations, some of them like the Forbidden Planet’s monsters from the id, will one day pounce from the shadows and surprise us. And the consequences can be very grave indeed.

The story of the Waco siege gives us insight into some of the more perverse perceptions and motives lurking in the human psyche, which if summoned have priority over the need to maximize one’s socio-economic status or even the need for self-preservation. The perverse altruism of the Waco cult members was neither accounted for nor understood by the authorities dealing with the siege. No doubt the latter, with all good intentions, wanted to end the siege without bloodshed, but they appeared to be using the cold war model. They tried very hard to offer incentives to get the cult members to defect, using both the carrot (safety) and the stick (cutting off supplies and making life generally uncomfortable with psychological pressures). All to no avail. Even when the senses of the cult members were assaulted with tear gas and their lives threatened by fire they did not budge from the cult’s compound. The authorities reckoned without the cult’s perverse loyalty to David Koresh, who by this time was claiming Son of God status and was fathering ‘Children of God’ through the cult’s women. That’s nothing new either: Akhenaten, 3500 years before Koresh, thought and did the same, as have so many other deluded religious leaders. The irony of it! The socio-biologists have a field day on this point!

If anything, the measures taken by the authorities stiffened the resolve of the cult members, who saw in these attempts the very thing Koresh warned them of. Koresh had succeeded in reinventing, once again, that well-known cognitive virus that ensures that its hosts interpret any attempt to get rid of it as a sure sign of the presence of Satan. Hence, all attempts to dislodge the virus have the very opposite effect and simply embed it more deeply in the psyche. Thus, like the efforts to wriggle free from a knotted tangle or the struggles of a prey to escape the backward-pointing teeth of a predator, the bid for freedom by the most direct path only helps to consolidate the entrapment. The only way to untie a difficult knot is to slowly and patiently unpick it bit by bit.


Cold war games theory failed at Waco on at least two counts. Firstly, the rationality of the cult members was based on a false view of reality – they saw the authorities, a priori, as the agent of Satan and Koresh as God’s saviour. Secondly, the selfishness of the cult members was not that of looking after themselves. The greater selfishness was embodied in a loyalty to Koresh and his viral teaching. In the face of this, the incentives and disincentives offered by the authorities were worse than useless. It never occurred to them that here was a group of people who were prepared to go for a “Darwin award”. (http://www.darwinawards.com/)

As our society faces other issues where religion looms large, such as Iraq and Islamic terrorism, we may find that Smith’s invisible hand provides no solution.
To be continued....

Wednesday, June 13, 2007

Mathematical Politics: Part 8

Complex Adaptive Systems
The Santa Fe Institute is an affiliation of largely male academics seeking to spread the theoretical net as widely as possible – especially into the domain of human society and complex systems in general. The theoretical holy grail is to write equations encompassing all that goes on under the sun and thus arrive at a preordained order which, once captured, means that theoreticians can retire declaring their work to be done. The universe is then a museum piece embodied in a few equations we can muse over, knowing that if we crank their handles they will churn out the answers. They will thereby encode all secrets and mysteries, thus making them no longer secrets and mysteries.


Thank God that’s not true. Even granted that physics contains catchall equations, those equations are very general and not specific. Moreover, our current physics, with its appeal to the absolute randomness of quantum fluctuation, suggests that endless novelty is encoded into physical processes. As Sir John Polkinghorne points out in his book “Exploring Reality”, the chaotic tumbling of Hyperion, a moon of Saturn, is maintained by the underlying perturbations of quantum fluctuations. The motion of Hyperion is forever novel.
****
John Holland is a pioneer in the field of genetic algorithms. His work (amongst that of others) has revealed a close connection between learning systems and evolution. Both processes ring the changes and lock in successful dynamic structures when they find them. These structures select themselves because they have the adaptable qualities needed for self-maintenance in the face of the buffetings of a world in restless change. In evolution those structures are phenotypes adapted to their ecological niche; in learning systems they are algorithms encoding successful models of the world, thereby allowing their host organism to anticipate aspects of that world.

In his book “Complexity”, Mitchell Waldrop tells of John Holland’s lecture to the Santa Fe Institute. He describes how the theoreticians listening to Holland’s lecture were gobsmacked as Holland delivered a home truth – there aren’t any final equations (apart, perhaps, from some very general physical constraints) because reality is exploring the huge space of possibility and is therefore delivering endless novelty. That endless novelty can’t be captured in a specific way in some catchall theory. In fact there is really only one thing that can cope with it – learning systems like human beings, or as Holland calls them, “complex adaptive systems”. These are systems that are themselves so complex that they have the potential to generate an endless novelty, a novelty that matches or perhaps even exceeds that of their surroundings. Thus, these systems are either capable of anticipating environmental novelty or else at least able to learn from it when it crops up. There are no systems of equations capturing everything there is to know about complex adaptive systems or the environments they are matched to cope with. Therefore it follows that, apart from God Himself, there is only one system with a chance of understanding something as complex as human beings with all their chaotic foibles, and that is another human being.

To be continued.....

Thursday, May 24, 2007

Mathematical Politics: Part 7

Expecting the Unexpected.
The “super complexity” of human beings means that they are capable of throwing up unexpected “anomalies”, and by “anomalies” I don’t mean phenomena that are somehow absolutely strange, but only something not covered by our theoretical constructions. Just when you think you have trapped human behavior in an equation, out pops something not accounted for. These anomalies strike unexpectedly and expose the limits of one’s analytical imagination. They can be treated neither statistically, because there are too few of them, nor analytically, because the underlying matrix from which they are sourced defies simple analytical treatment.

Take the example I have already given of the supermarket checkout system. This system can, for most of the time, be treated successfully using a combination of statistics, queuing theory and the assumption that shoppers are “rational and selfish” enough to look after the load-balancing problem. But there is rationality and rationality. For example, if there is a very popular till operator who spreads useful local gossip or who is simply pleasant company, one might find that this operator’s queue starts to lengthen unexpectedly. The simplistic notions of self-serving and ‘rationality’ break down. Clearly in such a situation there is a much more subtle rationality being served. What makes it so difficult to account for is that it taps into a social context that goes far beyond what is going on in the supermarket queue. To prevent these wild cards impairing the function of the checkout system (for example, disproportionately long queues causing blockages) the intervention of some kind of managerial control may, from time to time, be needed.
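
A toy simulation of the checkout example (the numbers and the 'loyalty' mechanism are invented purely for illustration) shows the effect: when every shopper joins the shortest queue the tills stay roughly balanced, but a minority loyal to one operator lengthens that queue regardless.

```python
import random

def simulate(tills=5, ticks=2_000, arrivals_per_tick=5, loyal_fraction=0.0,
             favourite=0, seed=3):
    """Toy checkout model: each tick a batch of shoppers arrives and each till
    serves one customer. 'Loyal' shoppers always join the favourite till; the
    rest join the currently shortest queue. Returns average queue lengths."""
    random.seed(seed)
    queues = [0] * tills
    totals = [0.0] * tills
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if random.random() < loyal_fraction:
                choice = favourite
            else:
                choice = queues.index(min(queues))
            queues[choice] += 1
        queues = [max(0, q - 1) for q in queues]           # one served per till
        totals = [t + q for t, q in zip(totals, queues)]
    return [round(t / ticks, 1) for t in totals]

print("all shoppers pick the shortest queue:", simulate())
print("20% insist on till 0:                ", simulate(loyal_fraction=0.2))
```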

In short, laissez faire works for some of the people for some of the time, but not for all the people all of the time.

To be continued.....

Sunday, May 13, 2007

Mathematical Politics: Part 6

Mathematical Intractability
Randomness is a complexity upper limit – size for size, nothing can be more complex than a random distribution generated by, say, the tosses of a coin. A sufficiently large random distribution configurationally embeds everything there possibly could be. And yet in spite of this complexity, it is a paradox that at the statistical level randomness is very predictable: for example, the frequency of sixes thrown by a die during a thousand throws can be predicted with high probability. In this sense randomness is as predictable as those relatively simple, highly organized physical systems like a pendulum or the orbit of a comet. But in between these two extremes of simplicity and complexity there is a vast domain of patterning that is termed, perhaps rather inappropriately, “chaotic”. Chaotic patterns are both organised and complex. It is this realm that is not easy to mathematicise.

We know of general mathematical schemes that generate chaos (like for example the method of generating the Mandelbrot set), but given any particular chaotic pattern finding a simple generating system is far from easy. Chaotic configurations are too complex for us to easily read out directly from them any simple mathematical scheme that might underlie them. But at the same time chaotic configurations are not complex enough to exhaustively yield to statistical description.
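
As a one-line illustration of that asymmetry (using the logistic map rather than the Mandelbrot set, purely for brevity):

```python
def logistic_orbit(r=3.9, x=0.2, steps=20):
    """Iterate the logistic map x -> r * x * (1 - x): a one-line rule that
    produces a complex, chaotic-looking sequence for r ~ 3.9."""
    orbit = []
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(round(x, 3))
    return orbit

print(logistic_orbit())
# Going the other way - inferring the simple rule from a stretch of this
# output alone - is the hard direction the paragraph above points to.
```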

The very simplicity of mathematical objects ensures that they are in relatively short supply. Human mathematics is necessarily a construction kit of relatively few symbolic parts, relations and operations, and therefore relative to the vast domain of possibility, there can’t be many ways of building mathematical constructions. Ergo, this limited world of simple mathematics has no chance of covering the whole domain of possibility. The only way mathematics can deal with the world of general chaos is to either simply store data about it in compressed format or to use algorithmic schemes with very long computation times. Thus it seems that out there, there is a vast domain of pattern and object that cannot be directly or easily treated using statistics or simple analytical mathematics.

And here is the rub. For not only do human beings naturally inhabit this mathematically intractable world, but their behavior is capable of spanning the whole spectrum of complexity – from relatively simple periodic behaviour like workaday routines, to random behaviour that allows operational theorists to make statistical predictions about traffic flow, through all the possibilities in between. This is Super Complexity. When you think you have mathematicised human behaviour it will come up with some anomaly....

To be continued.....

Thursday, May 03, 2007

Mathematical Politics: Part 5

The Big But
Complex system theory, when applied to human beings, can be very successful. It is an interesting fact that many measurable human phenomena, like the size of companies, wealth, Internet links, fame, size of social networks, the scale of wars, etc. are distributed according to relatively simple mathematical laws – laws that are qualitatively expressed in quips like “the big get bigger” and “the rich get richer”. It is also interesting that the law governing the distribution of, say, the size of social networks has a similar form to the distribution of the size of craters on the moon. It is difficult to credit this, given that the objects creating social networks (namely human beings) are far more complex than the simple elements and compounds that have coagulated to produce the meteors that have struck the moon. On the other hand, there is an upper limit to complexity: complexity cannot get any more complex than randomness, and so once a process like meteor formation is complex enough to generate randomness, human behavior in all its sophistication cannot then exceed this mathematical upper limit.
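
One standard mechanism behind the “rich get richer” quip is proportional allocation in the style of the Simon/Yule process – my illustrative choice here, not a model of any particular economy. A mechanism of this kind is known to produce heavy, power-law-like tails.

```python
import random

def rich_get_richer(rounds=100_000, new_agent_prob=0.05, seed=4):
    """Simon/Yule style process: each round either a brand-new agent appears
    with one unit of wealth, or an existing unit is duplicated - i.e. a unit
    is awarded with probability proportional to current wealth."""
    random.seed(seed)
    units = [0]                      # units[i] = owner of wealth unit i
    next_agent = 1
    for _ in range(rounds):
        if random.random() < new_agent_prob:
            units.append(next_agent)            # newcomer starts with one unit
            next_agent += 1
        else:
            units.append(random.choice(units))  # proportional allocation
    wealth = {}
    for owner in units:
        wealth[owner] = wealth.get(owner, 0) + 1
    return sorted(wealth.values(), reverse=True)

w = rich_get_richer()
top = w[: max(1, len(w) // 100)]     # the richest 1% of agents
print(f"agents: {len(w)};  top 1% hold {sum(top) / sum(w):.1%} of all wealth")
print("five largest holdings:", w[:5])
```

The spread that comes out is far more unequal than an even-odds allocation would give, which is the qualitative point being made above.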


The first episode of “The Trap”, screened on BBC2 on 11th March 2007, described the application of games theory to the cold war (a special case of complex system theory). The program took a generally sceptical view (rightly, in my opinion) of the rather simplistic notions of human nature employed as the ground assumptions needed for games theory and the like to be applicable to humanity. To support this contention the broadcast interviewed John Nash (he of “A Beautiful Mind” and “Nash Equilibrium” fame – pictured), who admitted that his contributions to games theory were developed in the heat of a paranoid view of human beings (perhaps influenced by his paranoid schizophrenia). He also affirmed that in his view human beings are more complex than the self-serving, conniving agents assumed by these theories.

Like all applications of mathematical theory to real world situations there are assumptions that have to be made to connect that world to the mathematical models. Alas, human behaviour does, from time to time, transcend these models and so in one sense it seems that human beings are more complex than complex. But how can this be?

To be continued....