Friday, August 01, 2008

GET DAWKINS


Everyone with a theistic worldview is Dawkins bashing nowadays. I’m not a natural atheist basher myself: given the human predicament the atheist position is at least plausible and deserves some respect and consideration, especially as much of the religious world is not only crackpot but is so horribly blighted by a mindless, bullying hegemony. However, although I’m not a natural enemy of atheists, the impassioned and polarised state of the debate, especially in America, forces one to choose sides. Protagonists like Richard Dawkins seem determined to make enemies of all who don’t subscribe to their take on the subject of Primary Ontology, which to be fair is a highly speculative topic that really demands a tentative, trial approach rather than a violent melee of “we don’t take prisoners” crusaders. OK then Richard, have it your way; you’ve got a thoroughly alienated enemy here. Happy now? To this end I reproduce below an otherwise private article I wrote in 1993 as a response to Richard Dawkins’ article in the New Statesman in 1992. If you can’t beat the Dawkins bashers, join them! Makes a change from fundamentalist bashing, I suppose!

HOW TO KNOW YOU KNOW YOU KNOW IT

Knotes on Richard Dawkins’ article “Is God a Computer Virus”, New Statesman, Dec 1992

by Tim Reeves 18/9/93

Revised January 1997

1) RELIGIOUS RAMBOS exercising their believing muscles, nervous wimps in religion, wild red-blooded Catholics, and virtuoso believers skilled in the arts of believing the unbelievable are some of the characters one meets in Richard Dawkins’ article "Is God a Computer Virus” (New Statesman, Dec. 92). I found the subject matter of the article laughably caricatured and I wasn't sure how seriously the proposition was to be taken. But having heard and read Dawkins in the past I think he is serious, although he takes far from seriously those who are the subject of his thesis. His ideas do have a bearing in some quarters, especially the cults, but for myself I found it difficult to identify the faith of some of the people I meet, or my own faith, with what Dawkins describes. However, if Dawkins is right, then it is unlikely that I could make such an identification, because it is no doubt part of the survival strategy of the "faith virus" to be difficult for its victims to detect. It is, therefore, difficult for a person of faith to oppose the faith virus theory by attempting to prove that they are not a victim, because the theory probably implies that the virus is likely to induce its victim to try to do this anyway in order to enhance its survivability. Thus my opposition, as a person of faith, will hardly count as evidence. In this respect Dawkins’ faith virus theory is remarkably like the faith virus itself, in that one can say of both, to quote Dawkins, "Once the proposition is believed it automatically undermines opposition to itself"! The faith virus theory is also self-referential, like the faith virus. But I am not going to be too hard on Dawkins here; self-reference, particularly of the self-supporting, stable kind, as I will go on to show, is not necessarily a bad thing. But let us note the irony in Dawkins’ thesis!

2) THE PENULTIMATE PARAGRAPH of the article is really the most interesting bit. Here, after considering and lampooning (harpooning?) those wallowing in the sea of faith, this solid, no-nonsense, bah-humbug biologist attempts to put his intellectual anchors down on what he thinks is the firm bedrock of science. As we know, this bedrock, in a philosophical sense, is far from firm. Who is going to tell him?

3) AS DAWKINS SMUGLY throws out an anchor in the penultimate paragraph it seems to me that his anchor simply catches on to the very raft on which he is standing. His justification of science in terms of exacting selective scrutiny of concepts, non-capricious ideas, evidential support, repeatability, progressiveness, independence of cultural milieu etc., itself appears to be a scientific view of science, presumably based on some juxtaposition of a theoretical conception of science and observations of it. Now, I don't want to be misunderstood here; some might think that this scientific self referencing is sufficient to rubbish Dawkins, but for myself, not only do I tend to agree with his views on science (although they need further scrutiny), but I find no a priori problem with self-description, self-reference, or self-justification. For the moment, however, let us be aware that it is happening, covertly, in this penultimate paragraph, where Dawkins anchors science to science, and let us, once again, make a note of the irony.

4) RUSSELL'S PARADOX, which was a famous contradiction arising from self-referencing or self-descriptive statements, was solved by doing not much more than simply disallowing such self-referencing. However, this solution, although valid, was highly artificial. Self-description and self-reference can and do exist, but if we allow them to exist in our knowledge there is a price: self-description and self-reference are forms of feedback, and therefore if we accept them we also have to accept that, in analogy with systems where feedback exists, we will have the possibility of both stable and unstable conceptual behaviour. The stability of non-contradictory thinking contrasts with the unstable world of contradiction, where we find a cognitive analogy to the oscillatory or chaotic behaviour produced by certain types of feedback. The liability to unstable cognition is a consequence of the human ability to have knowledge about knowledge, and it becomes a possibility as soon as we allow even subtle updates in what we know about our knowledge to register themselves as part and parcel of that knowledge. Of course, in mathematics, one can attempt to rule out this conceptual feedback, but in the real world of cognition it exists, and Russell's solution, although fine for set theory, is only an artificial device.
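(Aside: the stable, oscillatory and chaotic behaviours of feedback systems alluded to above are easy to demonstrate numerically. The short Python sketch below is purely illustrative; the logistic map is an arbitrary stand-in for any feedback process, and the parameter values are chosen simply to exhibit all three behaviours.)

# Toy illustration of feedback: iterate x -> r * x * (1 - x).
# Depending on the feedback strength r, repeated self-application
# settles to a fixed point (stable), oscillates, or becomes chaotic.

def iterate(r, x0=0.4, steps=100):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r, label in [(2.8, "stable fixed point"),
                 (3.2, "oscillation"),
                 (3.9, "chaos")]:
    tail = [iterate(r, steps=n) for n in range(100, 104)]
    print(f"r = {r} ({label}): orbit tail = {[round(v, 4) for v in tail]}")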

5) I HAVE A THEORY that at least part of the reason for the demise of 18th century rationalism was the rather unnerving effects of conceptual feedback. With the early success of science and the consequent feeling that the race was onto a good thing, it is not surprising that eventually a rational view of rationalism would be attempted. Thus, as in various philosophical endeavours understanding sought to understand itself, what was to prove a very dangerous loop was quietly closed, and a trap was set for any who might leave the straight and narrow. A little knowledge was to prove a dangerous thing, and so in a series of philosophical debacles the painful signals of conceptual feedback started to flow like blood in a previously constricted limb. Extreme empiricism in the form of positivism discovered unstable feedback as soon as people started asking whether the verifiability principle was itself verifiable. Kant, in his search for a-priori synthetic knowledge about the world, failed to get to the other side of the cognitive interface we have with it, and was left dumbfounded by the result that mind appeared to be justified by mind; he virtually lost contact with the "external" world. Darwin sensed the possibility of unstable feedback as he mused over whether an evolutionary system, which appeared to be governed only by a survivalist ethic, had any obligation to produce minds that could understand evolution. The ulterior motives that sometimes lurk behind reflections such as these can be highly self-destructive. On many issues (but certainly not all) high standards of empirical verification and/or testing are possible, and this is capable of supporting a healthy level of scepticism because providence allows its satiation with sufficient empirical demonstration. However, this scepticism has a tendency to become more extreme, thoroughgoing and demanding, perhaps as a result of a proud desire for intellectual self-sufficiency founded on "absolute" knowledge. And so, as if in a judgment on the abuse of providence, this scepticism is permitted to start to doubt and therefore undermine the a-priori methods, assumptions, and mental toolkit that providence supplies in order for scepticism to go about satiating itself in the first place. Thus, unable to take the utility of these gifts of providence for granted, human scepticism remains deeply unsatisfied. It is as if the stomach, in its craving for food, were to start digesting itself.

6) THE PHYSICAL SCIENCES are served by a meritocratic elite, holders of strange and deep secrets who express themselves in obscure technical and mathematical language that not many can fathom. This knowledge, some may say, and many appear to accept, is the key to the mystery of life, the universe, and everything; every academic subject is just a footnote to physics; the physicist Richard Feynman called the social disciplines pseudo-science, and the theoretical physicist Stephen Hawking writes the wave function for the universe! Whether right or wrong, the attitudes underlying this sort of thing are asking for trouble; some of the gut feelings at the bottom of both religious and secular humanism are offended here.

At the heart of humanistic endeavours there seems to be a necessity for a basic a-priori optimism about the possibilities open to human achievement and its ability to eventually attain peace, justice and fairness. If this sometimes seemingly crass optimism didn't exist, it is unlikely that humanistic projects like Marxism (and Fascism), for example, would ever get off the ground. Hence, it may be felt that a situation where an intellectual elite hold the keys of all knowledge just can't be right; it cuts across humanistic optimism; it isn't fair, it isn't accountable, it isn't egalitarian; surely the universe is amenable to a more socialist approach? Worst of all, I suspect, is that it also cuts across some people’s own taste for intellectual hegemony. So, it is with great glee that some of these people pounce on the discomfiture of science found in the great feedback debacles, where the physical sciences appeared to lose something of their absoluteness. Moreover, Kant had shown that the human element must be highly active in the pursuit of knowledge. So, in the battle for academic and intellectual hegemony there have been those who, jealous of the position and achievements of the physical sciences, have so emphasised the human element in the quest for knowledge that one can be forgiven for thinking they are suggesting this element is all there is to it, and that human and cultural studies are therefore more fundamental. Some of the more extreme disciples of social historicism, objective idealism, dialectical materialism, existentialism, and subjective idealism appear to be consumed by ‘science envy’. There may be something to be said for all these points of view, but when they become fortresses in the battle for intellectual domination, the trappings of offence and defence do not help toward an impartial consideration of them. It is as if ophthalmologists were to claim that the meaning of life, the universe and everything is to be found by studying the eye. After all, a case could be made for this in as far as much of life revolves round sight! The motives behind the intellectual anti-science culture are not only clear, but it is also clear that it is a culture that has potentially worse conceptual feedback problems than the physical sciences. The latter may claim (although it can never assuage absolute scepticism) that the world of physical laws is unchanged by thought and culture, and that science therefore has the effect of anchoring knowledge. But if, in contrast, knowledge is only part and function of culture, then as it seeks to know that culture (of which it is a part) it will in turn change that culture, thus in turn altering itself. Therefore, it will find itself following a moving target, leading, perhaps, to a runaway feedback situation resembling a dog chasing its own tail. Perhaps a run-around of this kind is precisely what is happening in our society!

7) CONCEPTUAL FEEDBACK is inevitable, but an important question is: is the feedback we are interested in stable? If it were not possible to attain stable knowledge we could not know about this instability ourselves in a stable way, because if we did it would, of course, contradict this condition of universally unstable knowledge. However, if some truly stable knowledge existed, then a stable belief in stability could be part of that stability; therefore a-priori stable conditions admit the possibility that we could know of this stability in a stable way. (Got that?) At least a modicum of a-priori stability is required for stable knowledge; we would not be the creators of this stability - it would just have to be accepted and exploited, as probably happens in the physical sciences. Moreover, the physical sciences seem to be blessed with a high proportion of solid and reliable types who tend not to endlessly analyse their assumptions, thus closing the feedback loop, but instead are inclined to go ahead and exploit their "hard", "firm", "soft", and sometimes thoroughly "wet" brainware to the full. No wonder the social and human disciplines find progress more difficult! If some of the students of these disciplines concentrated less on making a style out of defining and redefining themselves, of constantly being insecure about making assumptions, and of tampering with a mental toolkit they don't understand, along with various other behavioural affectations, then they might find progress easier! I therefore have great sympathy with Dawkins' implicitly self-referencing, but stable, characterisation of science. However, one may ask what, apart from gut reaction, makes Dawkins object to some of the equally stable "faith viruses" he describes?

8) THE ARTICLE FAILS to elucidate the reason for this gut reaction because it gives little space to the question of just what characteristic makes a virus a virus. If all that characterises a virus is that it is a resilient self-perpetuating packet of information, then I suppose one might argue that knowledge of how to open a door is a virus. The latter is a concept that spreads from human to human, and even cats and dogs have been observed to catch this very successful cognitive virus. In this sense any useful piece of information becomes a virus. But what marks out a useful piece of information from a virus is the former’s role in relation to a larger context; what distinguishes the door-opening concept from a virus is that the former takes part in a wide symbiosis: without the door-opening concept life as a whole would become very difficult, whereas without a virus life is not difficult; on the contrary, life's identity and stability is usually enhanced by the absence of a virus, whereas the absence of useful information not only diminishes life’s identity and stability, but may even make life impossible, because life is dependent on useful information. In contrast, life is not necessarily dependent on a virus, although the virus is inevitably parasitic upon life and therefore necessarily dependent on it. In a strange inversion of transactive justice the virus may even exact a cost on the host for the privilege of helping to maintain the viral identity; namely, chaos in the host. In a word, the virus gives nothing and takes all, just short of the final extinction of the host. So, is the God concept of this ilk? I would say no; it is part of a wider symbiosis, and a lot more than that!

9) THE NARROW CONFINES of extreme forms of reductionist materialism, dialectical materialism, existentialism, relativism and subjective idealism may be dogmatic about what can be, but for myself I found that I could never claim to know enough to be able to say "there is no God". For all I knew the notion of God could be both intelligible and real. Take the issue of intelligibility: is the concept of God so diffuse that it is meaningless? Dogmatic "intelligibility atheism", like "ontological atheism", founders on the inevitable finiteness of experience and knowledge; my concepts of complex things such as "personality", "human beings", social interactions and even complex computer behaviour may also be rather diffuse. But clarity comes with experience and learning, and, moreover, there seems to be a rough rule that the less trivial and more significant something gets, the less amenable it is to the immediate senses and lower cognitive functions. If God existed, then like many social entities, there was going to be a problem for me in grasping both the meaning and reality of God. No "proof of God" was ever likely to be found that was big and complex enough, and I could no more expect to "see" God than I could expect to "see", other than metaphorically, a personality or a society. What should one do when faced with these uncertainties? Should one commit oneself to atheism, God, or nothing at all? To me there seemed to be no middle way. Like a man in an aircraft going down in flames, I had only two options: stay with the aircraft or bale out. But I had an advantage over the man in the burning aircraft: he could make a wrong decision; the plane may or may not crash badly, or he may or may not muff the parachute drop. Christian living appeared to do good things to people, things that a thoroughgoing secular philosophy failed to do. So even if this God business was rubbish I had little to lose by giving it a try. Born out of uncertainty and the need to act was the realisation that, as Pascal noted, opting for God was the better half of the bet. I couldn't go wrong. So I made my choice and it turned out to be the best thing I ever did! And so it should have been; absence of proof or disproof proves nothing; if God was logically meaningful and ontologically real, and moreover personal and relevant to my existence, then positive evidence was obliged to come along eventually. It has been said that assertions of existence are scientifically intractable because one has to look all over the place to prove or disprove them; however, the logic changes a little if that which is asserted to exist comes looking for you!

10) LACK OF BALANCE is how I would describe Dawkins’ article in the New Statesman. In trying to maintain a balance myself, I would acknowledge that there might be circumstances to which Dawkins’ ideas apply, but there are also religious connections to which they do not. For myself, and some other nervous wimps in religion, Dawkins’ ideas are inapplicable because experience, evidence, reason and philosophy play an important role in nurturing and maintaining faith. For example, the inevitable level of givenness in the universe, the consciousness discontinuity, historical evidence re the life of Christ and His resurrection, personal experiences and the synchronous nature of certain events in one's life, all act in the germination and maintenance of faith. Of course, in comparison with the kind of secular intelligentsia that Dawkins represents one might appear a mistaken fool about all this, but, nevertheless, we are talking here of something far removed from the sort of "faith" described by the article. In the article we find reference to a kind of fideist and gnosto-dualist faith that is self-supporting in the sense that it loves itself more and more as it is less and less contaminated by the profanity of evidential or reasoned support of any kind.

Much more could be made of the "compost" of experience and evidence that helps nourish the seeds of faith during growth, but I would like to pick up something which is more in line with the theme of self-reference.

11) DAWKINS IS RIGHT, in a sense, that a faith like the Christian faith does have a considerable conceptual stability arising out of its self-referential nature. However, this self-referential stability exists not in the way described in the article, but as a result of the Christian belief in a loving personal God. To see this, contrast it with the opposite view, namely, that the world, apart from oneself, is primarily and fundamentally apersonal and/or disinterested. Influenced by a belief (for such it is) of this sort, one may rightly question and feel sceptical about whether one's knowledge represents anything at all; apersonal parties and/or principles, by definition, carry with them no a-priori absolute guarantee of the representational nature of any knowledge. The belief that the universe is primarily disinterested and/or apersonal is not only inclined to violate the foundation of knowledge, but it may start to undermine itself, as one may wonder whether this belief itself actually represents anything. The result is that you are either left with nothing or next to nothing, or, less honestly, you fudge the issue by becoming philosophically diffuse and won't dare admit to holding anything resembling a belief in truth. At most there might be an acknowledgement of the utilitarian value of beliefs (presumably itself a belief). This tendency toward nihilism, or at best "minisculism", instability and confusion contrasts with the stability of a belief in a personal loving God. If I hold such a belief I am more likely to see accurate representational knowledge as an outcome of God's love for me. Needless to say, the very belief in a loving God is itself seen as the providential outcome of that love, because it is believed to be imparted by a God who desires to reveal to us not only truths, but above all Himself. This belief is self-referencing, but it is, of course, highly stable because it is self-affirming. Moreover, it encourages a proactive growth in knowledge, as this growth no longer seems a pointless exercise; in this context knowledge is believed to mean something. Therefore, if there is a God, the belief in God will further reinforce itself as further knowledge and experience of God's love and providence is sought for, and inevitably gained. This growth and reinforcement will depend on that knowledge and experience being of the corroborating kind; but for it to start at all one must first start with God. My view is that to do so is to concur with an essential component of one's mental toolkit. There is no way one can from an absolutely sceptical basis "prove" this starting point without getting into unstable conceptual feedback cycles; we just have to assume it and exploit it. We are absolute dependents whose first premise is "In the beginning God ...."


© T. V. Reeves 1993

Sunday, July 13, 2008

Celebrity Death Match: Dembski vs. Bentley

The Swot
VS.
The Hulk


Here’s an unusual and novel juxtaposition of protagonists: see here and here. William Dembski, Intelligent Design guru, does a Todd Bentley meeting! It vaguely reminds me of the spate of postmodern films that bring incommensurable superheroes into collision: e.g. Miss Marple vs. The Terminator or something like that.

I didn’t know whether to put this post on “Quantum Non-linearity” or on “Views News and Pews”, so I’ve posted it on both. It is clear, as the second link reveals, that Dembski was left with a very unfavourable impression of Bentley (I dread to think that it could have been otherwise). It’s interesting to see that Dembski made the very same observation that I made when I went to a Benny Hinn meeting: “…the exodus from the arena of people bound in wheelchairs was poignant.” But I hasten to add that I did not, like Dembski, travel many miles to get to my meeting: in order to save rental costs the financially savvy Hinn organization, conveniently for me, decided to bring their show to Norwich football ground, which is less than 20 minutes’ walk from my house. I certainly did not drag my family along; instead I went, as usual, in the capacity of an amateur researcher with camera and notepaper.

One of the commentators on Uncommon Descent takes Dembski to task for even giving Bentley’s ‘healing miracles’ the slightest credence from the outset. Fair comment, except that Dembski has a severely autistic son, and so he was understandably vulnerable to the ‘clutching at straws’ effect. Typically of this kind of Christian scene there is an exploitation of the emotions associated with the unknown, especially fear of the unknown. It’s all too easy to follow a false trail for something you really want: you hope against hope that the next corner or the next horizon will reveal the vista you are longing for. It never comes, but whilst you are in a state of ignorance the sheer hope strings you along. And when the carrot of hope fails to lead you up the garden path, there is the stick of fear, fear that an inscrutable and unknown god might just be revealing himself in the utterly unpalatable, and who knows what displeasure he will visit upon those who do not swallow it. It’s all a very pagan view of God: it is a ministry that trades on fear, ignorance, numinous dread, submission, and above all on the notion of an unaccountable angry god whose actions are to all intents and purposes arbitrary. Pagan practices down the ages have thrived on this: I'm reminded of a burial site next to the Cursus in Dorset where a Neolithic woman and children were found buried by archaeologists who had a sneaking suspicion that they were uncovering a tragic story of human sacrifice: what satanic things humans can screw themselves up to do when they believe they are sanctioned by the Divine.

Is there a connection here with Intelligent Design theory? I hope not. I respect the efforts and faith of Dembski and his many followers, who are carrying out a valuable critique of evolution and are presenting worthy challenges to people who think they believe in evolution. However, one of my niggles with ID theory is that it introduces an arbitrariness. In ID theory “Intelligence” is used as a kind of wild card or black-box notion: “Intelligence is as intelligence does”. The broad sweep of paleontological change, which at least presents a prima facie case for evolution, then fails to cohere; it is like a story that seems to be a story but which we are told is no story after all. I have yet to find out how ID theory accounts for paleontological history, and so far ID seems to be a theory of negation, a case of showing that evolution isn’t possible. After that it’s the old shrug of the shoulders, the appeal to the inscrutable and the wild card, and sometimes there is the dark hint that anyone asking deeper questions is rushing in where angels fear to tread.

Thursday, July 03, 2008

Another Project to Dabble In

I seem to have plenty of science projects to dabble in, and this is helping to pleasantly ‘kill time’ on this side of eternity. So I thought I would have a bit of a sabbatical from the study of the evolution/ID debate, with its ‘red in tooth and claw’ feel, and went back to my old word association program ‘thinknet’. This project, which has its roots in some code that goes back to the nineties and ideas that go even further back, links words, or rather ‘ideas’, into a network of associations. The links are weighted with an appropriate probability, and these probabilities emerge as a result of the statistical properties of dictionaries of links. (That’s the theory at least.) In this model a link is naturally a two-way thing: a link from concept 1 to concept 2 implies a link from concept 2 to concept 1, although in the most general case the probability weightings of the pair of links will be different. (A concept may be linked to any number of other concepts.) This picture seems to have a fairly fundamental mathematical basis in the distribution of properties over items. However, it is natural to wonder just what isomorphisms this model has with the neural networks in the brain. A single neuron is most likely to have a large number of incoming links connected to its dendrites but only one outgoing link via the axon. What does that mean?
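To pin down what I mean by a two-way link with asymmetric weights, here is a minimal Python sketch. It is purely illustrative: the class and method names are my own stand-ins and bear no relation to the actual thinknet code, and the weights are made-up numbers.

from collections import defaultdict

class AssociationNet:
    """A toy network of concepts joined by weighted two-way links."""
    def __init__(self):
        # concept -> {neighbouring concept: probability weight}
        self.links = defaultdict(dict)

    def add_link(self, a, b, p_ab, p_ba):
        # A link from a to b implies a link from b to a,
        # though in general the two weights differ.
        self.links[a][b] = p_ab
        self.links[b][a] = p_ba

    def associations(self, concept):
        # Neighbours ranked by link probability.
        return sorted(self.links[concept].items(),
                      key=lambda kv: kv[1], reverse=True)

net = AssociationNet()
net.add_link("door", "open", 0.8, 0.3)   # illustrative weights only
net.add_link("door", "house", 0.6, 0.5)
print(net.associations("door"))  # [('open', 0.8), ('house', 0.6)]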
(I will, of course, continue with the evolution/ID studies in due course, and thanks to all those who make this study such an exciting project.)

Monday, June 30, 2008

Terminal Moran

It’s all too typical to see Larry Moran getting uptight in this blog entry about the Darwin symposium held at the Royal Ontario Museum. Apparently an inordinate amount of time was given to the adaptationist view of evolution, and other mechanisms of evolution were all but ignored. He called it an adaptationist lovefest. Now, I have favoured Prof Moran’s ‘pluralist’ view on evolution from day one - in particular the notion that genetic drift is a factor in evolutionary change. Genetic drift reminds me of the tendency of a language to drift over time – random changes in the language move it along a line of possibility where functionality is neither impaired nor enhanced, and this constitutes a ‘neutral’ change. The abstract construction I have in my mind is of morphospace being laced with contours of structural stability, and a kind of diffusive motion exploring these morphological ‘isobars’ in a cosmos that is in morphological disequilibrium. The conjectured fibril/spongy structure of functionality in morphospace may have a diversity of regions displaying equilibria, instabilities and non-linearities, and these might provide a way to explain the vagaries, surprises and starts and stops of evolutionary change.
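Genetic drift itself is easy to demonstrate numerically. The following Python sketch, my own illustration rather than anything from Moran's post, is the textbook Wright-Fisher model: a selectively neutral variant wanders in frequency purely by sampling noise until it is fixed or lost.

import random

def drift(pop_size=100, p0=0.5, max_gens=10_000):
    """Wright-Fisher neutral drift: returns (generations, final frequency)."""
    p = p0
    for gen in range(max_gens):
        # Each gene in the next generation is drawn at random from
        # the current gene pool -- pure sampling noise, no selection.
        count = sum(random.random() < p for _ in range(pop_size))
        p = count / pop_size
        if p in (0.0, 1.0):
            return gen + 1, p
    return max_gens, p

random.seed(1)
for trial in range(3):
    gens, outcome = drift()
    fate = "fixed" if outcome == 1.0 else "lost"
    print(f"trial {trial}: neutral variant {fate} after {gens} generations")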

Larry gets uptight about a non-pluralist view because an exclusively adaptationist view weakens the explanatory power of evolutionary theory. And Larry needs a strong theory of evolution because it defers or even obscures the deep questions over ultimate ontology that are waiting in the wings and which might threaten a vanilla atheism. Anyone like myself who so much as conjectures about the outer theoretical context framing our cosmos and the possibility of a God-like object is likely to be looked at askance, if not accused of superstition.

On the other hand, perhaps the emotional investment here is needed, needed to propel the investigation down a particular avenue of research. People like myself who are under-motivated about whether or not evolution is a ‘fact’ will perhaps lack the focus on a particular theoretical outlook. What better person to drive the evolutionary option than someone who has backed themselves into an ontological corner because their world view and everything they stand for is threatened if evolution doesn’t stand up?

But by the same token we need those of an opposite persuasion. The theoretical notion of a morphospace with connected domains of functionality requires adversaries to generate contrary evidence. Hence I have turned to the ID people on Uncommon Descent. They are delighted with a question like ‘Is Darwin past its sell-by date?’, a question which was bandied about at the symposium much to Larry’s annoyance. Such phrases hint that evolutionary theory is not as robust as Larry, with his atheism-or-bust Weltanschauung, would like. For me the input from ID theorists is not only respected but is vitally important to test this vague notion of a connected morphospace probed by the requisite motions. Given my state of knowledge there is the possibility that the ID contention about the unevolvability of biological structures is right, and therefore it needs to be investigated. However, to so much as take their efforts seriously will upset people like Larry; he - necessarily, in order to protect his world view – regards them all as ‘IDiots’, because that excuses him from giving them countenance! Trouble is, some of the ID people have also backed themselves into a corner and, like Larry, are effectively partisan and crusading converts to their cause. They are all liable to shoot first and ask questions afterwards, or even worse not bother with the questions at all. If you go in, make sure you’ve got a skin as thick as a rhinoceros. Both are at the end of the line with no way out.

Saturday, June 07, 2008

Network Norwich Physics Debate

Network Norwich columnist James Knight has a probing article on the subject of the physical laws here. In the subsequent comment thread he also raises some searching questions on the nature of probability. I recently spent a bit of time replying to James’ queries and this can be seen on the thread attached to his article. There are also some useful comments on evolution in this thread.

Thursday, May 08, 2008

Intelligent Debate

Whilst the world eagerly awaits my contribution to the evolution/Intelligent Design debate (I am currently studying some papers by ID experts) people might like to follow the scholarly discussion as it rages elsewhere. Professor William Dembski, dubbed the ID community's “big shot” intellectual, has decided to broaden this interdisciplinary field even further, and in this post he considers what effect economic realities might have on the debate. Atheist Professor Larry Moran replies in his blog here. One of Moran’s commentators uses some technical jargon that I don’t understand when he describes Dembski’s contribution as a load of f*ck*ng sh*t.

In Dembski’s consideration of how social position might impinge on the debate I am not sure that I understand his concept of class: I translate the triplet ‘upper class/middle class/working class’ as ‘aristocratic/property owner/wage earner’. I think that Dembski understands it in a more American way: that is, in terms of one’s financial place in society rather than in terms of one’s relation to the means of production as per Marx.

Why didn’t I take up fishing as a hobby? I like it really...

Thursday, April 24, 2008

In No Man's Land

I have been studying the following ID Internet material: This, this, and this. In due course I intend to reply here.

If I were to keep within the terms of reference of the latter link then I would confine my efforts purely to the question of whether or not the Second Law of Thermodynamics contradicts evolution. However, like their opponents in the atheist corner, the ID community has a tendency to burn its boats and has little appetite for concessions; it has committed itself to the view that the Second Law is inconsistent with evolution. The no-holds-barred battle that is raging between the ID community and the evolutionists is making life difficult – it’s like going bird watching in a World War I trench zone.

Wednesday, April 02, 2008

Uncommon Descent and Common Decency

I have been deeply in discussion with the ID theorists on Uncommon Descent. See here, here, here, and here. I have endeavoured to keep things as cordial and decent as possible, although a commentator who calls himself KairosFocus on the last link proved to be a bit touchy at first. He has done a lot of work on thermodynamic issues and has some worthy points to grapple with; it is a subject on which I breezed in confidently and presumed to pronounce. Moreover, given that, like other ID theorists, his name has been kicked around, his touchiness is understandable.

What I do find difficult to handle is the lack of detachment one finds on both sides of the creation/evolution debate, with emotions running high and insults filling the web pages. The reasons for this, I think, are clear: the atheists have strongly identified themselves with evolution because uncomfortable and tricky questions about the primary ontology of the cosmos (which so easily lead on to God) can then be at least deferred, if not cleared off the table altogether. The ID theists, on the other hand, with their concept of the direct intervention of what they evasively call ‘intelligence’, are being both effectively non-committal and in-yer-face at the same time (and of course everyone knows what they are really talking about). They have dragged God off the philosophical back shelf into the foreground and spotlight of scientific polemic, thus challenging many atheists in their own preferred field: and they don’t like it! Sometimes the ID theorists look like chickens running around with targets on their backs! You’ve got to hand it to them; they are courageous!

I don’t particularly want to make enemies of reasonable, fair and intelligent proponents on either side of the debate, but I suppose that’s just too much to ask. Both sides are looking for someone to shoot at, and if you have an intermediate position like mine you have to do some pretty nifty diplomatic footwork to dodge the bullets from either side.

As a theist I don’t find myself losing sleep or getting uptight about whichever side of the reasoned argument is prevailing. With no particular vested interest it makes me a pretty cool customer. I personally regard that as a good thing: emotions can cloud one’s judgment and self-awareness. And yet on the other hand, like fuel, it is emotion that keeps one flying high.

Sunday, February 17, 2008

Chocks Away!

Right, after several months on atheist Larry Moran’s blog getting up to speed on evolution I’ve at last decided to invade the “opposition’s” airspace! See this link, and especially this link, for results.

I’m not really an antagonist; some argue for either evolution or ID as if their lives depended on it. In my case I’m not hunting down enemies; I’m hunting down answers – if there are any to be found! But nevertheless, the ironies are profound.

Wednesday, February 06, 2008

The Black Swan

I’ve recently finished reading this book by Nassim Nicholas Taleb. It left a favourable impression with me and so I’m glad to see that it has become a best seller. The book exposes the conceits and false confidences in our ability to understand and/or predict that may accompany what are effectively toy-town theoretical renderings of complex realities. One commentator suggested that Taleb’s book is repetitive, but given how easily human beings fall into the trap of what Taleb calls ‘epistemic arrogance’, the lesson needs to be repeated over and over again.

There is far more right with this book than there is wrong with it, but if I were to engage it critically I would want to delve a bit deeper into the following matters:

The Gaussian bell curve: This curve is a product of the random sequence, the latter being a limiting form of complexity and therefore of fundamental interest. However, I think that Taleb is nevertheless right here: perfect randomness (and therefore the Gaussian bell curve) has a limited application in our world. Our world exists somewhere in that vast trackless domain between high order and maximum disorder. (A small numerical sketch contrasting the Gaussian with Taleb’s scalable power law follows this list of points.)

Fractal mathematics and the scalable power-law treatment of ‘negative black swans’, like disasters: Fractal scalability only seems to exist over relatively small logarithmic ranges. Taleb is, of course, aware of this, but suggests that the limits of scalability are academic if we live inside the scalable zone.

Retrospective explanatory narratives: Taleb pours scorn on those who rationalize things after the event. My own feeling is that the ‘NP’-like structure of some domains of study allows outcomes, like riddles, to be correctly rationalized only after the event. But on the whole I suspect Taleb is right again: retrospective analysis, like the Gaussian bell curve, sometimes works; often, however, it may simply amount to constellation spotting.

Theoretical Narrative: There is nothing wrong in theorizing per se: it’s just that this activity must proceed with caution and humility, always self-aware on an acutely self-critical meta-science level: how and why do we know we know?

The Cosmos is not sufficiently ordered for it to be universally amenable to elementary mathematical models, and yet it is not disordered enough for comprehensive statistical treatment. Mathematical probability might seem to be a way of dealing with uncertainty, but it only works if the possible cases can be enumerated. Unlike games of chance, our world is open-ended, with a space of possible cases of staggering dimension and unknown limit. Therefore formal probability cannot be used to quantify the unknown or provide a handle on those surprising ‘one-off’ events that hit us again and again. To attempt to do so is what Taleb calls the ‘Ludic fallacy’.
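Here is the numerical sketch promised above: a contrast, in Python, between the two statistical regimes Taleb writes about. It is my own illustration, not an example from the book; the sample sizes and the tail exponent are arbitrary choices.

import random, statistics

random.seed(42)

# Bell-curve regime: sums of 1000 independent coin flips crowd
# tightly around the mean -- no single flip can move the total much.
sums = [sum(random.randint(0, 1) for _ in range(1000)) for _ in range(2000)]
print("coin-flip sums: mean %.1f, stdev %.1f, max %d"
      % (statistics.mean(sums), statistics.stdev(sums), max(sums)))

# Scalable (power-law) regime: Pareto samples with tail exponent 1.5.
# Here one draw can dwarf the sum of all the others -- a 'black swan'.
pareto = [random.paretovariate(1.5) for _ in range(2000)]
print("pareto samples: mean %.1f, max %.1f"
      % (statistics.mean(pareto), max(pareto)))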

Taleb is brash and abrasive about ‘empty suit’ forecasters and the ‘nerd knowledge’ of academia. This will undoubtedly upset those people who know they are being targeted. It was easy for me to laugh heartily at Taleb's amusing and quite extreme jibes and anecdotes, because I'm not the target. But those whose ivory-tower positions have given them such a breathtaking complacency that they think it is in order to bully and insult us into their beliefs have brought Taleb the Tornado on themselves. On balance I would rather we had people around like Taleb to challenge an endemic epistemic attitude problem than live in comfortable denial. If we take Taleb’s lesson to heart we will be better emotionally adjusted for the next big shock.

Monday, January 28, 2008

The Intelligent Design Contention: Part 4

Casey Luskin
In this web article Casey Luskin, Intelligent Design apologist, responds with a comprehensive rebuttal of America’s National Academy of Sciences’ negative assessment of ID. The article exploits the self-critical quotes of evolutionary theorists who candidly admit weaknesses and gaps in the theory in relation to a variety of outstanding problems: the inconsistencies in trying to build phylogenetic trees, the poor state of abiogenesis, problems in relating evolution to the fossil record, and the patchy evidence of hominid evolution. For me these problems are not unexpected given the nature of the beast – evolution is one of those big historical theories dealing with a complex ontological category. My own gut feeling is that evolutionary theory, given what it is trying to achieve, does a good job of achieving it – that of linking a diversity of observations into one theoretical framework. It will, of course, never be as sure-footed as, say, celestial mechanics or atomic theory, simply because it deals with such epistemologically intractable objects as deep time and, it hardly needs be said, the most complex objects we know of – living things. It is unfair, as Luskin does in his article, to make comparisons with the much simpler objects that physics studies. Evolution is certainly no less well founded than some of the other epistemologically tricky things I believe in, like, for example, historical narratives, middle-of-the-road socialism, the constitutional monarchy, or theism. I take evolution and evolutionists seriously. It’s one thing to criticize evolution, but it’s quite another to attempt to advance an alternative theory that is as successful.

But I also take Luskin and his fellow ID proponents seriously, and especially that key concept of ID, irreducible complexity. In this series of posts I will be using my blog as a kind of sounding board to help develop my abstract ideas in relation to this key ID notion. In ID theory, if theory it is, the whole edifice of intelligent design stands or falls by the notion of irreducible complexity. Hence it is this concept that I want to focus my efforts on here.

In his article Luskin addresses the NAS treatment of irreducible complexity. He answers the sort of criticism of irreducible complexity that we find in the video of Ken Miller which I posted in my last entry on this subject:

Dembski …. recognizes that Darwinists wrongly characterizes irreducible complexity as focusing on the non-functionality of sub-parts. Conversely, pro-ID biochemist Michael Behe, who popularized the term “irreducible complexity,” properly tests it by assessing the plausibility of the entire functional system to assemble in a step-wise fashion, even if sub-parts can have functions outside of the final system. The “leap” required by going from one functional sub-part to the entire functional system is indicative of the degree of irreducible complexity in a system. Contrary to the NAS’s assertions, Behe never argued that irreducible complexity mandates that sub-parts can have no function outside of the final system.

Luskin also quotes Michael Behe, the “Galileo” of ID, who explains why the existence of the Type Three Secretory System as a precursor of the E. coli flagellum proves little:

At best the Type Three Secretory System represents one possible step in the indirect Darwinian evolution of the bacterial flagellum. But that still wouldn’t constitute a solution to the evolution of the bacterial flagellum. What’s needed is a complete evolutionary path and not merely a possible oasis along the way. To claim otherwise is like saying we can travel by foot from Los Angeles to Tokyo because we’ve discovered the Hawaiian Islands. Evolutionary biology needs to do better than that.


Our ignorance of the structure of morphospace is cutting both ways here: Behe is claiming that absence of evidence of possible evolutionary routes in morphospace is evidence of absence. On the other hand, evolutionists claim that absence of evidence of these evolutionary paths is not evidence of absence. Of course, neither party has provided killer evidence either way. So who has the edge here? The claims of ID theorists and evolutionists are, logically speaking, in complementary opposition rather than symmetrical opposition. It is surely an irony that of the two sides ID, in an elementary Popperian sense, is ostensibly making the more easily refutable claims: ID is stating a quasi-universal, namely that evolutionary paths to working structures like the flagellum of E. coli don’t exist – all we need to do is find one route and the proposition is ‘falsified’. Evolutionists, on the other hand, are making an existential statement; they are claiming that the routes do exist: such statements can’t be falsified, but they can be ‘verified’ by just one case - if they find that one case their claim is ‘proved’. But both sides have their work cut out; the sheer size and complexity of morphospace makes it a little more difficult to investigate than the Pacific! Moreover, the ontological complexities of what is basically a historical subject will no doubt scupper any claims of either absolute falsification or verification, and at best only evidence tipping the balance in one direction or the other is likely to be found.

However, having said that, the existence of “islands” such as the TTSS does start to weaken the ID case: the ‘irreducible complexity’ of the flagellum structure is thus less irreducible than we were led to believe; the TTSS sets a precedent that starts to erode blanket claims that evolutionary paths don’t exist. If Behe is so demanding as to require evolutionists to map out a full path, then it is only fair that evolutionists are equally demanding and require ID theorists to show that such paths don’t exist. So given that neither party can easily provide a suite of evidence that comprehensively proves their case, we have to plump for partial evidence weighing the case in either direction. Thus, in this weaker sense the TTSS is evidence in favor of evolution, whatever Behe says.

Since the ID theorists have come up with no absolute proof that evolutionary routes to their ‘irreducibly complex’ structures don’t exist, they are forced to sit with a passive uneasiness, hoping that such paths don’t pop out of the scientific woodwork. There is a marked difference in the strategic positions of the opposing sides: ID theorists are in passive defense, hoping that evolutionary paths through morphospace will not be found. Evolutionists, on the other hand, have the initiative – they can imaginatively and proactively search for solutions – and who knows, they may yet come up with the goods. Frankly, from a strategic point of view I would rather be an evolutionist.


One final question that Luskin addresses is this: Is ID science? Luskin says, yes, of course it is! True, the notion of irreducible complexity as a bare idea is an intelligible notion that can be investigated empirically and logically, although as it deals with the structure of that complex beast we call morphospace, scientists have their work cut out. However, it is when ID theorists jump from irreducible complexity to the operation of intelligence that the waters start to muddy. When we discover an archeological object that looks like an artifact, the working assumption is that human intelligence has been at work. We can test this assumption in relation to the known traits of human beings. But ID does not identify the source of the artifaction, and in spite of using the scientific-sounding term ‘Intelligent Design’, that intelligence is a wild card: it is a naked intelligence of unknown powers and motives. This throws into sharp relief such questions as: What is the nature of intelligence? How does intelligence achieve what it does? Why is intelligence needed to create certain structures? It is these sorts of questions I hope to probe and make some progress with in this series of posts.

Monday, January 14, 2008

The Intelligent Design Contention: Part 3

Ken Miller
Here is a video of a lecture from the biologist Ken Miller (see picture) speaking out against Intelligent Design. Miller is a Catholic and an evolutionist. I don’t fully agree with all that he says about the nature of science: I don’t think a clear-cut distinction can be made between the ‘natural’ and the ‘supernatural’, with science only dealing in the former; the ontological and epistemological categories in our world are too blended for us to sharply partition the world in this way. However, the philosophy of science isn’t the issue I wish to pursue here. In this connection I am more interested in Miller’s comments on the ‘science’ of ID. Miller uses his area of expertise in biology to good effect when he successfully challenges the ID notion of irreducible complexity, especially in connection with the E. coli flagellum, the immune system and the blood-clotting cascade. He shows the ID view that parts of these systems are of no use by themselves to be false. He also makes some very notable observations on the tactics being used by the ID community.

Miller makes a particularly compelling point (apparently one of the points that helped carry the Dover trial ruling against ID) when he suggests that the introduction of ID into classrooms under the rubric ‘Teach the Controversy’ creates a false dichotomy between science and religion; that is, it frames the debate to look as though it is ‘naturalistic’ evolution vs. ‘supernatural’ creation by God, with, of course, ID coming in on the side of God against those who, like myself, favour evolution. Ironically, many atheists would agree with this framing. This is typical, typical of so much evangelicalism – lines are drawn in order to define the ‘view of the righteous’ and woe betide you if you find yourself on the wrong side of the line. In this way spiritual duress is applied, and this has the effect of pressuring believers to fall into line.

Not that Miller’s own denomination doesn’t have a history of pressuring believers to fall into line. Although I am very uncomfortable with centrally controlled religion, in this case Miller’s denomination is reaping a benefit: If the leadership just happen to take a reasonable view on an issue this can make itself felt all the way down the line (but, of course, the reverse happens as well!). In contrast evangelicalism is highly fragmented, with local fanatics often securing and commanding ignorant and gullible followings.

Friday, January 04, 2008

The Intelligent Design Contention: Part 2

Intelligent Design Examples
The de-facto symbol, rallying emblem, and prototype of the Intelligent Design school is the propulsion flagellum of the bacterium Escherichia coli. A complex motor-like construction delivers the revolutions to the flagellum. The ID case may derive some of its compelling quality from diagrammatic representations of the flagellum drive. These diagrams (see picture) show a structure with rotary components that has at least a superficial resemblance to a piece of human engineering. This resemblance may help elicit the intuitive gut reaction “An inventor must have designed and made it!”. Just looking at representations of the flagellum drive sets the mind up to be more susceptible to the ID contention that this structure of harmonized components must have come together in one grand-slam creative act overseen by an Intelligent Designer. Clearly this designer knew all about wheel bearings long before humans were rolling stone monuments of Egyptian kings on logs. Inventors, particularly male inventors, have never been able to get the invention of the wheel out of their heads, and when they see a wheel they know there must be a like mind around somewhere.
Other examples of biological engineering used to promote ID are the blood-clotting cascade and the immune system (the arguments for and against here can be locked onto using Google). These molecular-level systems do not just depend on the production of a single protein but consist of a molecular ‘industrial process’, a production line of interdependent proteins that achieves the required result. Remove one protein from this production line and the functionality of the system is severely compromised. Thus, if vital biological systems like blood clotting and the immune system fail to function for want of a single component, then the organism hosting these substructures becomes unstable and dies. In the abstract the ID argument is this: how could all these parts have come together without intelligence? For clearly, ID theorists argue, they must have come together as a whole, because removal of any one part leads to failure of function and death. These systems, they claim, cannot be made any simpler; that is, they cannot be reduced – they are ‘irreducibly complex’. The two ‘big name’ Christian theorists associated with the defense of the concept of irreducible complexity are Michael Behe and William Dembski. In some Christian circles they are megastars, Davids fighting courageously against the Goliaths of evolutionary theory.

I recently heard about another ID theorist whilst reading Sandwalk, the blog of atheist Larry Moran. Sandwalk reported (critically, of course) on the work of Canadian ID theorist Kirk Durston, who is researching proteins, the active chemical ‘bricks’ of living things. To carry out their function the long molecular strands comprising protein molecules must be folded into appropriate shapes. According to Durston there comes a point where incremental changes in the molecular sequence of proteins completely disrupt their folding, thereby making them non-functional. Durston is claiming that if changes in the molecular structure of proteins go beyond a certain magnitude threshold they cannot do their job. Once again the ID case rests on the difficulty of conceiving how certain biological structures could have come about except as complete, up-and-running systems. The concept of irreducible complexity does not recognize half-measure structures – structures either work or they don’t. Biological solutions, it is claimed, have no sense of nearness or vicinity attached to them – they have to be either bang on target or they are a complete miss.

The ID vs. Darwin debate usually centers round specific organic examples like the prototypes I have given above. There seems to be a reason for this example-by-example treatment. Morphospace is a colossal and hugely complex platonic construction, highly inhomogeneous in its possibilities, and embracing objects of different types and levels – from atomic configurations that make up proteins, through the molecular reactions of protein production lines, to the micro-engineering of E. coli. One could, of course, introduce even higher types – e.g. fully blown organisms or an ant’s nest (which is effectively a structure made of many individual organisms). Another tricky facet of morphospace is that environment has a critical bearing on the stability of structures; a structure that is stable in one environment might be unstable in another. Also, a structure may affect or become part of an environment, thus giving rise to the non-linear effects of feedback. Moreover, strictly speaking morphospace doesn’t just include biological structures, but just about everything that we can conceive of, and more, that can be made from atomic matter; this even includes human artifacts like a bar of soap, a house, Lego, a jet fighter, a computer, or a von Neumann machine. The class of objects covered by morphospace is so varied in typing and level, with so many unknown degrees of freedom, and so open-ended in functional possibilities, that general analytical treatment of this ‘space’ seems to be beyond the wit of man; hence the need to dwell on the specific rather than the general.
But in spite of all this, ID theorists are committed to the notion that in critical biological regions morphospace possesses a feature that is a barrier to evolution, or at least in the examples they constantly hark back to: they see these example biological structures standing isolated in a kind of design vacuum; that is, they are not surrounded by lower ‘marks’ or similar structures that could be part of an equally stable nexus. Thus, according to ID theory they have no stable neighboring structures that evolutionary gradualism could have passed through en route to the ‘final design’. ID theory therefore swings on the assumed disconnectedness of the regions of structural stability in morphospace. This is the rather brave and quite possibly wrong assumption on which ID theory rests, but it seems that the conceptual intractability of morphospace, so vast in its ramifying possibilities and typing, makes it difficult for evolutionists to refute this assumption with one-liners. The result is much frustration, annoyance and abrasive dialogue.

Monday, December 31, 2007

Fireside Problem

Over the Christmas period I have been grappling with a problem that has been hanging around with me for some time.
Now, it is possible to generate highly disordered sequences of 1s and 0s using an algorithm. An algorithm is also expressible as a sequence of 1s and 0s. However, whereas a disordered sequence may be long – say of length Lr – the algorithm sequence generating it, of length Lp, may be short. That is, Lp is less than Lr. The number of possible algorithm sequences will then be 2^Lp, whereas the number of possible disordered sequences will be nearly equal to 2^Lr. So, because 2^Lp is much less than 2^Lr, it is clear that short algorithms can only generate a very limited subset of all the possible disordered sequences. For me, this result has a counterintuitive consequence: it suggests that there is a small class of disordered sequences with a special status – namely, the class of disordered sequences that can be generated algorithmically. But why should a particular class of disordered sequences be so mathematically favoured when, in one sense, every disordered sequence is like every other disordered sequence in that they all have the same statistical profile? My intuitions suggest that all disordered sequences are in some sense mathematically equal, and yet it seems that algorithms confer a special status on a small class of these sequences.
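As a concrete illustration of the asymmetry (my own sketch, nothing more), here is a short program whose text is a few dozen characters yet which emits a long, statistically disordered bit sequence. The particular generator, a 16-bit linear feedback shift register with textbook parameters, is an arbitrary choice:

def lfsr_bits(seed, taps, length):
    # Galois-style linear feedback shift register over 16 bits.
    state = seed
    for _ in range(length):
        bit = state & 1
        state >>= 1
        if bit:
            state ^= taps
        yield bit

# The program above is far shorter than its output: Lp << Lr.
sequence = list(lfsr_bits(seed=0xACE1, taps=0xB400, length=1000))
print(sum(sequence) / len(sequence))   # close to 0.5: no gross statistical bias

Counting programs of this size shows immediately that they are far too few to cover more than a vanishing fraction of the 2^1000 possible output sequences.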
I think I now know where the answer to this intuitive contradiction lies. To answer it we have to go back to the network view of algorithmic change. If we take a computer like a Turing machine, then it seems that it wires the class of all binary sequences into a network in a particular way, and it is the bias introduced by the network wiring that leads to certain disordered configurations being apparently favoured. It is possible to wire the network together in other ways that would favour other classes of disordered sequences. In short, the determining variable isn’t the algorithm but the network wiring, which is a function of the computing model being used. And the computing model inherent in the network wiring has as many degrees of freedom as a sequence of length Lr. Thus, in as much as any disordered sequence can occupy an algorithmically favoured position depending on the network wiring used by the computing model, no disordered sequence is absolutely favoured over any other.
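To make the wiring point concrete, here is a toy model (entirely my own construction, compressing the network picture into something trivially small): treat a change of ‘network wiring’ as a fixed XOR mask applied to a machine’s raw output. Relative to a given mask, a different class of sequences becomes the cheaply reachable one; in particular, for any target sequence there is a wiring under which the most trivial program of all reaches it:

def run_machine(raw_output, wiring_mask):
    # The observed sequence is the raw output 're-wired' by the mask.
    return [a ^ b for a, b in zip(raw_output, wiring_mask)]

target = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]   # some arbitrary 'disordered' sequence
trivial = [0] * len(target)               # output of the simplest possible program

# Under the wiring that happens to encode the target, the trivial
# program generates it exactly:
assert run_machine(trivial, wiring_mask=target) == target

Since the mask has as many degrees of freedom as the sequence itself, favour is always relative to the wiring, never absolute.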

Well, that’ll have to do for now; I suppose I had better get back to the Intelligent Design Contention.

Wednesday, December 12, 2007

The Intelligent Design Contention: Part 1

The Contention
For some months now I have been reading Sandwalk, the blog of atheist Larry Moran, or, as he dubs himself, ‘A Skeptical Biochemist’. (He can’t be that skeptical, otherwise he would apply his skepticism to atheism and graduate from ‘skeptic’ to ‘doubter’.) One reason for going to his blog was to help me get a handle on the evolution versus Intelligent Design (ID) debate. The ID case hinges on a central issue that I describe in this post.

The ID contention is, in the abstract, this: if we take any complex organized system consisting of a set of mutually harmonized parts that by virtue of that harmony form a stable system, it seems (and I stress seems) that any small change to the system severely compromises its stability. If small changes lead to a breakdown in stability, how could the system have evolved, given that evolution is a process requiring that such systems be arrived at by a series of incremental changes?

Complex organized systems of mutually harmonized components are termed ‘Irreducibly Complex’ by ID theorists. The term ‘irreducible’ in this context refers, I assume, to the fact that apparently any attempt to make the system incrementally simpler – by, say, removing or changing a component – results in severe malfunction, and this in turn jeopardizes the stability of the system. If the apparent import of this is followed through, it follows that there are no ‘stable exit routes’ by which the system can be simplified without compromising stability. And if there are no ‘stable exit routes’ then there are no ‘stable entry routes’ by which an evolutionary process can ‘enter’.

Mathematically expressed:

Stable incremental routes out = stable incremental routes in = ZERO.
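A toy model may make the claimed barrier clearer. The following sketch (my construction, not the ID theorists’) defines an all-or-nothing system of N parts that is ‘stable’ only when every part is present; all single-component simplifications fail, so there are no stable incremental routes in or out:

N = 5

def stable(parts):
    # All-or-nothing stability: the system works only as a complete whole.
    return all(parts)

complete = [True] * N
assert stable(complete)

# Every single-component simplification of the complete system fails,
# so gradualism has no stable intermediate to pass through:
for i in range(N):
    variant = complete.copy()
    variant[i] = False
    assert not stable(variant)

Whether real biological structures are like this is, of course, precisely what is in dispute.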

In the ID view many biological structures stand in unreachable isolation, surrounded by a barrier of evolutionary ‘non-computability’. Believing they have got to this point, ID theorists are then inclined to make a threefold leap in their reasoning: 1) they reason that at least some aspects of complex stable systems of mutually harmonized parts have to be contrived all of a piece; that is, in one fell swoop, as a fait accompli; 2) that the only agency capable of creating such designs in this manner requires intelligence as a ‘given’; 3) that this intelligence is to be identified with God.

I get a bad feeling about all this. Once again I suspect that evangelicalism is urinating up the wrong lamp post. Although the spiritual attitudes of the ID theorists look a lot better than those of some of the redneck Young Earth Creationists, I still feel very uneasy. So much of what one is supposed to accept under the umbrella of evangelicalism is administered with subtle, and sometimes not so subtle, hints that one is engaged in spiritual compromise if one doesn’t accept what is on offer. I hope that ID theory, at least, will not become bound up with those who apply spiritual duress to doubters.

Wednesday, November 14, 2007

Generating Randomness

I have been struggling to relate algorithmics to the notion of disorder. In my last post on this subject I suggested that an algorithm that cuts a highly ordered path through a network of incremental configuration changes, in order eventually to generate randomness, would require a ‘very large’ number of steps to get through the network. This result still stands, and it probably applies to a simple algorithm like the counting algorithm, which can be modeled as a simple recursive search of a binary tree network.

But, and here’s the big but, a simple counting algorithm does not respond to the information in the current results string (the results string is the equivalent of the tape in a Turing machine). When an algorithm starts receiving feedback from its ‘tape’, then things change considerably – we then move into the realm of non-linearity.
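The difference is easy to dramatize with a sketch (again my own illustration, under simple assumptions). A counting algorithm, blind to its tape, reaches a given L-bit target only after a number of steps of order 2^L, whereas a generator whose next bit is conditional on bits already written reaches a disordered string in a number of steps linear in L:

L = 16
target = 0b1011001110001011            # an arbitrary 'disordered' L-bit string

# Counting, with no feedback: step through 0, 1, 2, ... until the target appears.
steps, value = 0, 0
while value != target:
    value += 1
    steps += 1
print(steps)                            # equals the target's value: order 2^L

# Feedback: each new bit is a conditional function of bits already on the 'tape'.
tape = [1, 0, 1]
while len(tape) < L:
    tape.append(tape[-1] ^ tape[-3])    # one step per output bit: order L
print(tape)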

Given the ‘feedback’ effect, and using some assumptions about just what a Turing machine could at best achieve, I have obtained a crude formula giving an upper limit on the speed with which a Turing machine could develop a random sequence. A random sequence would require at least ‘t’ steps to develop, where:

t ~ kDn/log(m)
and where ‘k’ is a constant whose factors I am still pondering, ‘D’ is the length of the sequence, ‘n’ is an arbitrarily chosen segment length whose value depends on the quality of randomness required, and ‘m’ is the number of Turing machine states. This formula is very rough, but it makes use of a frequency profile notion of randomness I have developed elsewhere. The thing to note is that the non-linearity of algorithmics (a consequence of using conditionals) means that the high complexity of randomness could conceivably be developed in linear time. This result was a bit of a surprise, as I had never been able to disprove the notion that the high complexity of randomness requires exponential time to develop – a notion not supported by the speed with which ‘random’ sequences can be generated in computers. Paradoxically, the high complexity of randomness (measured in terms of its frequency profile) may not be computationally complex. On the other hand, it may be that what truly makes randomness computationally complex is that, for high quality randomness, D and n are required to become ‘exponentially’ large in order to obtain a random sequence with a ‘good’ frequency profile.
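For what it is worth, here is one possible reading of the frequency profile test, sketched in code (the post only alludes to the notion, so the details below are my assumptions): slide a window of width n along a sequence of length D and compare the count of each n-bit segment against the uniform expectation of (D - n + 1)/2^n:

from collections import Counter
import random

D, n = 10_000, 4
bits = [random.randint(0, 1) for _ in range(D)]

# Count every n-bit window in the sequence.
windows = [tuple(bits[i:i + n]) for i in range(D - n + 1)]
counts = Counter(windows)

# A 'good' frequency profile keeps every count near the uniform expectation.
expected = (D - n + 1) / 2 ** n
worst = max(abs(c - expected) / expected for c in counts.values())
print(f"worst relative deviation from uniform: {worst:.3f}")

On this reading, D and n appear directly as the test’s parameters, and demanding a better profile forces both upward.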

Thursday, September 27, 2007

The Virtual World

Why don’t I accept the multiverse theory, I have to ask myself. One reason is that I interpret quantum envelopes to be what they appear to be: namely, objects very closely analogous to the Gaussian envelopes of random walk. Gaussian random walk envelopes are not naturally interpreted as the product of an ensemble of bifurcating and multiplying particles (although this is, admittedly, a possible interpretation) but rather as a measure of information about a single particle. ‘Collapses’ of the Gaussian envelope are brought about by changes in information on the whereabouts of the particle. I see parallels between this notion of ‘collapse’ and quantum wave collapse. However, I don’t accept the Copenhagen view that sudden jumps in the “state vector” are conditioned by the presence of a conscious observer. My guess is that the presence of matter, whether in the form of a human observer or of other material configurations (such as a measuring device), is capable of bringing about these discontinuous jumps. (Note to self: the probability of a switch from state |w⟩ to state |v⟩ is given by |⟨w|v⟩|², and this expression looks suspiciously like an ‘intersection’ probability.)

The foregoing also explains why I’m loath to accept the decoherence theory of measurement: this is another theory which dispenses with literal collapses, suggesting that they are only an apparent phenomenon, an artifact of the complex wave interactions of macroscopic matter. Once again this seems to me to ignore the big hint provided by the parallels with random walk. Those parallels lead me to view wave function ‘collapse’ as something closely analogous to the change of information which takes place when one locates an object; the envelopes of random walk can change discontinuously in a way that is not subject to the physical strictures on the speed of light, and likewise for quantum envelopes. My guess is that the ontology of the universe is not one of literal particles but is rather an informational facade about virtual particles; those virtual particles can’t exceed the speed of light, but changes in the informational envelopes in which the virtual particles are embedded are not subject to limitations on the speed of light.
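The random walk analogy can be made concrete with a small simulation (an illustration of the analogy only; nothing here is quantum mechanics proper). The envelope is a probability profile over positions of a single walker, and an incoming report about the walker’s whereabouts ‘collapses’ it instantly, as a change of information rather than a physical signal:

import math

def gaussian_envelope(positions, t):
    # Probability profile of a single walker after t unbiased unit steps;
    # the variance of such a walk grows linearly with time.
    weights = [math.exp(-x * x / (2 * t)) for x in positions]
    total = sum(weights)
    return [w / total for w in weights]

positions = list(range(-20, 21))
p = gaussian_envelope(positions, t=50)

# Information arrives: the walker is reported to lie at x >= 0. The whole
# envelope changes discontinuously; no signal travels anywhere.
q = [v if x >= 0 else 0.0 for v, x in zip(p, positions)]
total = sum(q)
q = [v / total for v in q]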

Monday, September 24, 2007

Well Versed

I have been delving into David Deutsch’s work on parallel universes. Go here to get some insight into Deutsch’s recent papers on the subject. I’m not familiar enough with the formalism of quantum computation to be able to follow Deutsch’s papers without a lot more study. However, some salient points arise. In this paper, dated April 2001 and entitled “The Structure of the Multiverse”, Deutsch says:
“.. the Hilbert space structure of quantum states provides an infinity of ways of slicing up the multiverse into ‘universes’ each way corresponding to a choice of basis. This is reminiscent of the infinity of ways in which one can slice (‘foliate’) a spacetime into spacelike hypersurfaces in the general theory of relativity. Given such a foliation, the theory partitions physical quantities into those ‘within’ each of the hypersurfaces and those that relate hypersurfaces to each other. In this paper I shall sketch a somewhat analogous theory for a model of the multiverse”
That is, as far as I understand it, Deutsch is following the procedure I mentioned in my last blog entry – he envisages the relationships between the multiple universes as similar to the way in which we envisage the relationships between past, present and future.

On the subject of non-locality, Deutsch, in this paper on quantum entanglement, states:
“All information in quantum systems is, notwithstanding Bell’s theorem, localized. Measuring or otherwise interacting with a quantum system S has no effect on distant systems from which S is dynamically isolated, even if they are entangled with S. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that under some circumstances (in particular, in Einstein-Podolski-Rosen experiments and in quantum teleportation) quantum information is transmitted through ‘classical’ (i.e. decoherent) channels.”
Deutsch is attacking the non-local interpretation of certain quantum experiments. In this paper David Wallace defends Deutsch and indicates the controversy surrounding Deutsch’s position and its dependence on the multiverse contention. In the abstract we read:

“It is argued that Deutsch’s proof must be understood in the explicit context of the Everett interpretation, and that in this context it essentially succeeds. Some comments are made about the criticism of Deutsch’s proof by Barnum, Caves, Finkelstein, Fuchs and Schack; it is argued that the flaw they point out in the proof does not apply if the Everett interpretation is assumed.”

And Wallace goes on to say:

“…it is rather surprising how little attention his (Deutsch’s) work has received in the foundational community, though one reason may be that it is very unclear from his paper that the Everett interpretation is assumed from the start. If it is tacitly assumed that his work refers instead to some orthodox collapse theory, then it is easy to see that the proof is suspect… Their attack on Deutsch’s paper seems to have been influential in the community; however, it is at best questionable whether or not it is valid when Everettian assumptions are made explicit.”

The Everett interpretation equates to the multiverse view of quantum mechanics. Deutsch’s interpretation of QM is contentious. It seems that theorists are between a rock and a hard place: on the one hand lies non-locality and absolute randomness, and on the other an extravagant ontology of a universe bifurcating everywhere and at all times. It is perhaps NOT surprising that Deutsch’s paper received little attention. Theoretical Physics is starting to give theorists that “stick in the gullet” feeling – and that’s even without mentioning String Theory!

Saturday, September 22, 2007

Quantum Physics: End of Story?

News has just reached me, via that auspicious source of scientific information, Norwich’s Eastern Daily Press (20 September), of a mathematical breakthrough in quantum physics at Oxford University. Described as “one of the most important developments in the history of science”, the work, as far as I can tell from the report, uses multiverse theory to derive and/or explain quantum physics.

There are two things that have bugged scientists about Quantum Physics since it was developed in the first half of the twentieth century. Firstly, its indeterminism: it seemed to introduce an absolute randomness into physics that upset the classical mentality of many physicists, including Einstein: “God doesn’t play dice with the universe”. The second problem, which is in fact related to this indeterminism, is that Quantum Theory suggests that when these apparently probabilistic events occur, the distant parts of the universe hosting the envelope of probability for these events must instantaneously cooperate by giving up their envelope. This apparent instantaneous communication between distant parts of the cosmos, demanding faster-than-light signaling, also worried Einstein and other physicists.

Multiverse theory holds out the promise of reestablishing a classical physical regime of local and deterministic physics, although at the cost of positing the rather exotic idea of universes parallel to our own. It achieves this reinstatement, I guess, by a device we are all in fact familiar with. If we select, isolate and examine a particular instant in time in our own world, we effectively cut it off from its past (and future). Cut adrift from the past, much about that instant fails to make sense, and it throws up two conundrums analogous to the quantum enigmas I mentioned above. Firstly, there will be random patterns, like the distribution of stars, which just seem to be there, when in fact an historical understanding of star movement under gravity gives some insight into that distribution. Secondly, widely separated items will seem inexplicably related – like, for example, two books that have identical content. By adding the time dimension to our arbitrary time slice, the otherwise inexplicable starts to make sense. My guess is that by adding the extra dimensions of the multiverse a similar explanatory contextualisation has finally – and presumably tentatively – been achieved with the latest multiverse theory.

Not surprisingly, the latest discovery looks as though it has come out of the David Deutsch stable. He has always been a great advocate of the multiverse. By eliminating absolute randomness and non-locality, multiverse theory has the potential to close the system and tie up all the loose ends. Needless to say, all this is likely to proceed against a background of ulterior motivations and may well be hotly contested – not least the contention that Deutsch has made the greatest discovery of all time!
Postscript:
1. The tying up of all loose ends is only apparent; all finite human knowledge can only flower out of an irreducible kernel of fact.
2. Multiverse theory, unlike the Copenhagen interpretation of Quantum Mechanics, suggests that quantum envelopes do not collapse at all, but always remain available for interference. Hence it should in principle be possible to detect the difference between these two versions of Quantum Theory experimentally.