
Thursday, December 24, 2009

Darwin Bicentenary Part 30: The Mystery of Life’s Origin, Chapter 9

Continuing my consideration of the three online chapters of Thaxton, Bradley and Olsen’s book “The Mystery of Life’s Origin”.

Absence of Evidence…

In chapter 9 of The Mystery of Life’s Origin Thaxton, Bradley & Olsen start by expressing the essential mystery of life’s origin:

In Chapter 7 we saw that the work necessary to polymerize DNA and protein molecules from simple biomonomers could potentially be accomplished by energy flow through the system. Still, we know that such energy flow is a necessary but not sufficient condition for polymerization of the macromolecules of life. Arranging a pile of bricks into the configuration of a house requires work. One would hardly expect to accomplish this work with dynamite, however. Not only must energy flow through the system, it must be coupled in some specific way to the work to be done.

What TB&O are saying here is that although the energy required to carry out the work of organizing atomic and molecular units into a living configuration may be available, that energy in itself is useless without some kind of information engine directing how it is to be used. An analogous situation arises if one has a large work force of men; this work force is all but useless unless they have the information on how to act.

In chapter 9 TB&O look at the crucial question of abiogenesis; that is, the question of whether disorganized elemental matter has the wherewithal to organize itself into complex living structures. They consider a variety of proposed mechanisms: pure chance, natural selection, self-ordering tendencies in matter, mineral catalysis, non-linearity in systems far from equilibrium, and the direct production of proteins and DNA. Needless to say they are of the opinion that at the time of writing no substantive model of abiogenesis exists. I am not aware that the situation has improved substantially in favour of abiogenesis since TB&O’s book was written in the mid-eighties.

TB&O’s main point comes out very clearly when they consider experimental attempts to directly form polypeptides. TB&O believe that most theories of abiogenesis founder in one important respect: Whilst they accept that under the right disequilibrium conditions organic polymers can be synthesised, they point out that these polymers are randomly configured, and thus by implication contain no useful information content. In TB&O’s opinion the chief challenge to theories of abiogenesis is the question of what supplies the necessary “configurational entropy work”. In their words:

Virtually no mechanism with any promise for coupling the random flow of energy through the system to do this very specific work has come to light.

TB&O tender experimental results which suggest that the formation of polypeptides shows no bias in the frequency of connections:

The polymerization of protein is hypothesized to be a nonrandom process, the coding of the protein resulting from differences in the chemical bonding forces. For example, if amino acids A and B react chemically with one another more readily than with amino acids C, D, and E, we should expect to see a greater frequency of AB peptide bonds in protein than AC, AD, AE, or BC, BD, BE bonds.

Furthermore, the peptide bond frequencies for the twenty-five proteins approach a distribution predicted by random statistics rather than the dipeptide bond frequency measured by Steinman and Cole. This observation means that bonding preferences between various amino acids play no significant role in coding protein. Finally, if chemical bonding forces were influential in amino acid sequencing, one would expect to get a single sequence (as in ice crystals) or no more than a few sequences, instead of the large variety we observe in living systems. Yockey, with a different analysis, comes to essentially the same conclusion.

Coupling the energy flow through the system to do the chemical and thermal entropy work is much easier than doing the configurational entropy work. The uniform failure in literally thousands of experimental attempts to synthesize protein or DNA under even questionable prebiotic conditions is a monument to the difficulty in achieving a high degree of information content, or specified complexity from the undirected flow of energy through a system.

This kind of evidence supports TB&O’s belief that there is no natural information mechanism that can correctly configure an arrangement of atoms:

We have noted the need for some sort of coupling mechanism. Without it, there is no way to convert the negative entropy associated with energy flow into negative entropy associated with configurational entropy and the corresponding information. Is it reasonable to believe such a "hidden" coupling mechanism will be found in the future that can play this crucial role of a template, metabolic motor, etc., directing the flow of energy in such a way as to create new information?

*****

My own reaction to TB&O’s rather negative assessment of abiogenesis is that if they are genuinely interested in the possibility of a natural route from inorganic matter to functioning organisms they are looking in the wrong place and therefore find what they expect: No evidence of abiogenesis. But if they do start looking in the right place there is a major epistemological snag: it is in the nature of abiogenesis (and evolution as well) to be difficult to observe; that is, it is an epistemically challenging domain.

If natural abiogenesis has occurred on our planet then, as I have already mooted in this Darwin Bicentenary series, the coupling mechanism providing the information for abiogenesis is likely to be found in the arrangement of structures in morphospace; for abiogenesis to work there must exist a connected class of stable structures with complexities ranging from the very simple to the very complex; that is, abiogenesis requires biological structures to be reducibly complex. This connected class of biological structures is an object that is neither dynamic nor visible but instead is a static platonic structure which, if it exists, must be implicit in the laws of physics. The information for abiogenesis is not going to be found in chemistry that encodes the right information in biopolymers in a straightforward way. The information structures, if they exist, will be found in a mathematical object spanning morphospace, an object that has no material reification.

The scientific challenge faced by those committed to abiogenesis (and evolution too) should not be underestimated, because abiogenesis may be a computationally irreducible process. If this is true then it follows that there is no simpler analytical proof of the existence of abiogenesis than to observe the actual processes that generate life. In this case the evidence of abiogenesis will rest less on theoretical analysis than on observation. Hence an absence of fossil evidence becomes a serious barrier to scientific progress in abiogenesis and will add plenty of fuel to the fire of anti-evolutionism. Another problem for theories of abiogenesis is that if there is a natural route from inorganic matter to living structures then the fossil evidence is likely to consist largely of unstructured chemical signatures of equivocal interpretation. Researchers are thus left with little choice but to propose very tentative and speculative chemical scenarios that might look as though they have the potential to generate life. But if TB&O have presented the evidence fairly then it is clear that no consensus theory of abiogenesis exists, let alone a theory with a robust observational base.

TB&O in common with other anti-evolutionists conflate information and complexity:

Regularity or order cannot serve to store the large amount of information required by living systems. A highly irregular, but specified, structure is required rather than an ordered structure.

Whilst I would give a qualified acceptance to the idea that information is conserved in that it requires prerequisite resources either in the form of highly improbable pre-conditions or the huge spaces of a multiverse, it is not true in general that structural complexity cannot be “created” from regularity in relatively short times. Fractals and pseudo-random sequences are examples of “free lunch” complexity derived from simple starting conditions in polynomial computation times. As yet I know of no principled reason why the complex arrangements in morphospace required by abiogenesis cannot also be generated relatively rapidly from the right physical regime of laws. ID theorists may tell us that “equations don’t generate Complex Specified Information”, but in spite of that assertion equations can A) be the depository of large amounts of information, B) create complexity in polynomial time, and C) therefore be used to specify complex static forms. (See my last post)
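To make that free lunch concrete, here is a minimal sketch of my own (nothing from TB&O or the ID literature): a one-line deterministic rule, the chaotic logistic map, spins out an arbitrarily long, complex-looking bit sequence in time linear in its length.

```python
# A simple deterministic rule generating complex-looking output in linear
# (hence polynomial) time: the logistic map in its chaotic regime.
# Illustrative sketch only; the seed and threshold are arbitrary choices.

def logistic_bits(seed: float, n: int) -> str:
    """Iterate x -> 4x(1-x) and emit one bit per step (1 if x > 0.5)."""
    x, bits = seed, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append('1' if x > 0.5 else '0')
    return ''.join(bits)

# A few bytes of "specification" (the rule plus a seed) yield a sequence of
# any requested length that passes casual inspection as random.
print(logistic_bits(0.123456789, 64))
```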

The anti-evolutionists appear to rule out abiogenesis on the basis of the methodological rule “absence of evidence is evidence of absence”. But ardent evolutionists may claim that the onus is on ID theorists to demonstrate that biological structures are irreducibly complex and therefore cannot be the product of abiogenesis and evolution. See this post of mine for the ironies of this stand-off between evolutionists and anti-evolutionists. The irony is further compounded when one hears atheists attacking theism on the basis that there is no God because “absence of evidence is evidence of absence”. God and evolution are both very complex objects, so perhaps it is not surprising that, like abiogenesis, it is in the nature of God to be difficult to observe!

Saturday, December 12, 2009

Darwin Bicentenary Part 29: Dembski and Marks’ Latest Paper.

My Unfinished Business with the Anti-Evolution ID Community

William Dembski and Robert Marks’ latest published paper can be found by following a link from this Uncommon Descent post. Its content is very reminiscent of this paper which I considered here, here and here*. In this latest paper, however, Marks and Dembski shift the focus to what they call Bernoulli’s principle of insufficient reason, a principle that justifies their assuming equal a priori probabilities over a space of possibilities. It is likely that they feel the need to justify this principle further as their conclusions regarding the “conservation of information” stand on this assumption. I raised this very issue in this discussion on UD where I said:

Yet another issue is this: When one reaches the boundary of a system with “white spaces” beyond its borders how does one evaluate the probability of the system’s options and therefore the system’s information? Is probability meaningful in this contextless system? I am inclined to follow Dembski’s approach here of using equal a priori probabilities, but I am sure this approach is not beyond critique.

The choice of using Bernoulli’s PrOIR arises when one is faced with a set of possible outcomes for which there is no known reason why one outcome should be preferred over another; on this basis Bernoulli’s PrOIR then assigns equal probabilities to these outcomes. However, probability (as I proposed in my paper on probability) is a quasi-subjective variable and thus varies from observer to observer and also varies as an observer’s knowledge changes. In particular, probability estimates are liable to change if knowledge of the context in which the outcomes are embedded increases. For example, consider a six-sided die. Initially, if we know nothing about the die then using PrOIR we may assign an equal probability of 1/6 to each side. However, if, say, after examination of the die we find it to be biased, or find that it repeatedly returns lopsided frequencies when it is thrown, then these larger contexts of observation have the effect of weighting the probabilities of the six possible outcomes. Thus given this outer context of observation we find that we can no longer impute equal probabilities to the six possible outcomes. But, and this is Dembski and Marks’ essential point, the cost of buying better-than-PrOIR results on the die is paid for by an outer system that itself has improbable lopsided probabilities. From whence comes this improbable skew on the outer system? This skew can only itself come from a wider context that itself is improbably skewed, and so on. According to Dembski and Marks it is not possible, in realistically sized systems, to destroy this improbable loading. Ergo, some kind of conservation law seems to be operating, a conservation law Dembski and Marks call the conservation of information.
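As a toy rendering of the die example (my own sketch, not anything from Dembski and Marks), the two stages of knowledge look like this:

```python
# Stage 1: Bernoulli's PrOIR. Stage 2: an outer context of observation
# re-weights the probabilities. The loading below is an arbitrary stand-in
# for the "improbable skew" discussed in the text.
from collections import Counter
import random

sides = range(1, 7)

# Stage 1: knowing nothing about the die, PrOIR assigns 1/6 to each side.
prior = {s: 1.0 / 6.0 for s in sides}

# Stage 2: repeated throws of a die that is in fact loaded on side 6.
throws = random.choices(list(sides), weights=[1, 1, 1, 1, 1, 5], k=10_000)
counts = Counter(throws)
posterior = {s: counts[s] / len(throws) for s in sides}

print(prior)      # equal a priori probabilities
print(posterior)  # lopsided frequencies: roughly 0.1 each and 0.5 for side 6
```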

I believe Dembski and Marks’ essential thesis to be unassailable. Bernoulli’s PrOIR can be used to set up a null hypothesis that leads one to conclude that the extremely remote probabilities of living structures point to a kind of loaded die in favour of life. Thus, evolution, if it has taken place, could not have happened without biasing its metaphorical die with what Dembski and Marks call “active information”. In fact, as I discussed in this post, even atheist evolutionists, in response to one of Dembski and Marks’ earlier papers, admit that evolution must be tapping into the resources of a physical regime loaded in favour of life, most likely by the laws of physics; but from whence came our particular physical laws? For these atheists, however, this is effectively a way of shelving the problem by projecting it onto the abstruse and esoteric realm of theoretical physics where it awaits solution by a later generation of scientists. Hiding the mystery of the origins of living things in the ultimate genesis of the laws of physics allows atheists to get on with their day-to-day life without any worries that some unknown intelligence could be lurking out there.

It is ironic that multiverse theory is a backhanded acknowledgement of Dembski & Marks’ essential thesis. Speculative multiverse theory assumes that in an all but infinite outer context the butter of probability is spread very thinly and evenly. Hence, because no configuration is given a probabilistic bias over any others, it seems unlikely that there is an intelligence out there fixing up the gambling game in favour of life. But if we are to conclude that we can shrug off life as “just one of those chance things”, probability theory requires the multiverse to be truly huge. These immense dimensions are required to offset (or "pay for" in D&M's terms) the minute probability of our own apparently “rigged” cosmic environment. That the multiverse must be so extravagantly large in order to make our highly improbable context likely to come up in the long run, so to speak, is eloquent testimony of just how active Dembski and Marks’ active information must be if our own cosmos should in fact prove to be a one-off case.
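To put a rough number on “truly huge” (a back-of-envelope of my own, not D&M’s): if a single cosmos is life-permitting with probability p, the number of independent trials needed before a hit becomes likely scales as 1/p.

```python
import math

# Chance of at least one "life-permitting" cosmos in N independent trials,
# each of tiny probability p, is 1 - (1 - p)**N. Setting this to 50% gives
# N = ln(2)/(-ln(1 - p)), which is approximately ln(2)/p for small p.
p = 1e-120   # an arbitrary stand-in for a minute fine-tuning probability

N_needed = math.log(2) / p
print(f"Trials needed for a 50% chance of a hit: ~{N_needed:.2e}")  # ~6.9e119
```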

We must acknowledge of course that in the final analysis Dembski & Marks are anti-evolution ID theorists and therefore have their sights on “evilution”. They conclude that evolution, if it is to work in realistic time, must be assisted by a large dollop of active information; that is, the die must be loaded in its favour. As I see it there is no disagreeing with this general conclusion, but the irony is that just as the multiverse theorists acknowledge the concept of active information in a backhanded way, Dembski & Marks provide us with the general conditions of the scenario that evolution needs in order to succeed.

For Dembski & Marks’ paper, like all the papers on “active information” I am aware of, provides no killer argument against evolution. In fact, if anything, it points to the general criteria that must be fulfilled if evolution is to work. They tell us that if evolution has happened then somewhere there is a bias, an improbable skew hidden up in the system. The first port of call in the search for this bias is, of course, the laws of physics. But as a rule, anti-evolution ID theorists will try to tell us that simple equations can’t create information. Their argument, as I recall it, seems to run roughly along the following lines.

1. The laws of physics are to be associated with necessity.

2. Anything that is necessary has a probability of 1.

3. The information content of anything with probability of 1 is zero.

4. Therefore equations have no information content.

5. Information is conserved. (I give this point a qualified acceptance)

6. Therefore from 4 and 5 it follows that equations can’t create information.

The main fault with the above argument is point 1. Christian anti-evolution ID theorists often seem to be subliminal dualists, and this inclines them to contrast the so-called “natural forces” over and against the superior creator God. These inferior "natural forces" they inappropriately refer to as “chance and necessity”. The so-called “necessity” here is identified with the laws of physics or succinct mathematical stipulations in general. But such mathematical stipulations are themselves an apparent contingency and, as far as we are concerned, don’t classify as necessity; therefore they can embody information.

To see how laws can be the depository of information, it helps to understand that the quantity “information”, as Dembski and Marks use it (= −log[p]), can only pick up high improbabilities; it is not very sensitive to the exact details of the source of those improbabilities. This is because it lumps together a field of independent probabilities into a single quantity by summing the logarithms of those probabilities. Therefore “information” as an index cannot distinguish between single probabilities and complex fields of probability. In short, information is not a good measure of configuration. Thus single items of probability, if they have sufficient improbability, can embody as much information as collections or configurations of probabilities. This means that although it is true that equations don’t create information, they may yet be the depository of information in their own right if they are themselves highly improbable items; and this can be the case if those equations have been selected from a huge space of possibility over which Bernoulli’s PrOIR is applied. Equations are far from what anti-evolutionist ID theorists call “necessity”. In fact irony is piled upon irony in that we need look no further than Dembski and Marks’ paper for evidence that relatively simple mathematical stipulations may be the depository of active information.
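A quick numerical check of this lumping (my own, using the measure I = −log2 p): a single sufficiently improbable item carries exactly as much “information” as a large configuration of independent items.

```python
import math

def info_bits(p: float) -> float:
    """Self-information I = -log2(p), the measure Dembski and Marks use."""
    return -math.log2(p)

# One single item of probability 2^-100 ...
single = info_bits(2.0 ** -100)

# ... versus a configuration of 100 independent items of probability 1/2:
# the logarithms simply add, and the index cannot tell the two cases apart.
configuration = sum(info_bits(0.5) for _ in range(100))

print(single, configuration)   # 100.0 bits in both cases
```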

In Dembski and Marks’ latest paper we read this:

Co-evolutionary optimization uses evolutionary tournament play among agents to evolve winning strategies and has been used impressively to identify winning strategies in both checkers [25] and chess [26]. Although co-evolution has been identified as a potential source for an evolutionary free lunch [69], strict application of Bernoulli’s PrOIR suggests otherwise.

Dembski and Marks then go on to rightly observe that such systems have a built in metric of gaming success and that this “loads the die” with active information:

Claims for a possible “free lunch” in co-evolution [69] assume prior knowledge of what constitutes a win. Detailed analysis of co-evolution under the constraint of Bernoulli’s PrOIR remains an open problem.

Once again I agree with Dembski and Marks; the co-evolutionary game must be suitably loaded for complex game-winning strategies to evolve. But this is to miss the giveaway point here. It seems that all one need do is define some succinct game-playing rules, then impose a constraint in the form of some relatively simple game-winning metric, and then, given these two constraints, the “thermal agitations” of randomness are able to find those complex game-winning strategies in polynomial time. A presumed corollary here is that the space of gaming strategies is reducibly complex. Thus it follows that the stipulation of the rules of the game and the selection metric are sufficient to define this reducibly complex space of game-winning strategies. These stipulations have the effect of so constraining the “search space” that the “random” agitations are no longer destructive but are the dynamic that seeks out the winning solutions.

The foregoing is an example of an issue I have repeatedly raised here and have even raised on Uncommon Descent. The issue concerns the question of whether in principle it is possible to define complex structures (such as living things and game-playing strategies) using relatively simple reductionist specifications and then find these specified complex structures in polynomial time. The generation in polynomial time of complex fractals and pseudo-random sequences are examples of a polynomial time map from reductionist mathematical specifications to complex outputs. In the checkers game example the reductionist specifications are the gaming rules and the metric that selects successful strategies, and these together define a complex set of winning game strategies that can be arrived at in polynomial time using an algorithmic dynamic which uses random “thermal agitations”. Evolutionary algorithms that identify game-playing strategies are an example where simple mathematical stipulations map to complex outputs via a polynomial time algorithm. Of course, the fact that evolutionary algorithms work for games doesn’t necessarily mean that the case is proved for biological evolution given the laws of physics, but the precedent set here certainly raises questions for me against the anti-evolutionist ID theorists’ tacit assumption that evolution in general is unworkable. But at the very least it seems that evolution does have an abstract “in principle” mathematical existence.
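The bare logic of such systems, stripped of checkers and chess, can be sketched in a few lines (a toy of my own, not Dembski and Marks’ co-evolutionary setup): succinct rules, a simple selection metric, random agitation, and a complex target found in polynomial time.

```python
import random

TARGET = [1] * 64   # stands in for a "winning strategy" in a 2^64 space

def metric(s):
    """The 'game-winning metric': how many features match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve() -> int:
    """Random single-bit 'thermal agitations', kept when the metric approves."""
    s = [random.randint(0, 1) for _ in TARGET]
    steps = 0
    while metric(s) < len(TARGET):
        t = s.copy()
        t[random.randrange(len(t))] ^= 1   # one incremental change
        if metric(t) >= metric(s):         # selection against the metric
            s = t
        steps += 1
    return steps

print(evolve())   # typically a few hundred steps, nothing like 2^64
```

Blind search over the 2^64 configurations would be hopeless; the metric so constrains the search that the random bit flips become the dynamic that finds the solution.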


It is over this matter that I have unfinished business with the anti-evolution ID theorists. I have failed to get satisfaction from Uncommon Descent on this matter. I agree with the general lesson that, barring appeal to very speculative multiverse scenarios, information can’t be created from nothing – certainly not in polynomial time. But where I have an issue with the ID pundits of Uncommon Descent is over the question of whether or not it is possible to deposit the high information we observe in living structures in reductionist specifications such as the laws of physics. I suggest that in principle simple mathematical stipulations can carry high information content for the simple reason that maps which tie simple algorithms to complex outputs via a polynomial time link are very rare in the huge space of mathematical possibility, and thus have a low probability (assuming PrOIR) and therefore a high information content. The confusion that I see present amongst Uncommon Descent contributors is the repeated conflation of high information content with complexity; as I have said, information as a measure lumps together a configuration's improbability into one index via a logarithmic addition over its field of probabilities, and thus it cannot distinguish single high-improbability items from complex configurations of independent probabilities. Elemental physics may therefore be the depository of high information.

As I look at the people at the top of the anti-evolution ID community like Dembski and Marks, it seems to me that the implications of their work are not fully understood by the rank-and-file ID supporter. For that supporter, it seems, leans on his academic heroes, quite convinced that these heroes provide an intellectual bulwark against the creeping "atheist conspiracy". The impressive array of academic qualifications that these top ID theorists possess is enough, it seems, to satisfy the average ID supporter and YEC that the battle against “evilution” has been won. The very human attributes of community identification, crowd allegiance, loyalty and trust loom large in this scientific debate.



* I originally (and perhaps wrongly) attributed this paper to Dembski; it has no author name although its URL has Marks’ name in the directory path.

Friday, December 04, 2009

University of Big Disappointments

This University of East Anglia “Climategate” business that William Dembski keeps banging on about is rather (too) close to home - literally almost within walking distance; a childhood stamping ground in fact. Fears, doubts and vulnerabilities come out in private emails, sometimes with a gloss of black humour, bluster and even superciliousness. Trouble is, nobody admits to having them, else they are exploited by opponents and easily misrepresented. But doubts, vulnerabilities and even fears are the stuff of genuine science, as I suggested in this post long before “Climategate” had blown up. There is such a thing as a “scientific attitude”. However, covering up, secrecy, self-delusion, conceit, pride and above all epistemological arrogance are the antithesis of that attitude.


The Green House Effect at the University of East Anglia: Don’t cast the first stone if you live in a glass house like this.


Addendum 6/12/09. The Smith and Jones affair.

Below is a YouTube video that needs to be set beside the claims of those for whom this whole affair is a most wonderful windfall. Conspiracy theorist Alex Jones makes a brief and loud appearance on the video and is caught foaming at the mouth over the UEA emails. It's all very reminiscent of Pentecostal evangelical Barry Smith's millennium bug conspiracy, which of course is now all but forgotten. Interestingly and ironically, Smith's conspiracy theories implicated George Bush Senior in the new world order conspiracy; so are the religious right and the powerful environment-polluting corporation bosses supposed to be part of the conspiracy or against the conspiracy? To add to the "we're all having a ball" feel about these e-mail revelations I am beginning to wonder when David Icke is going to make an appearance; in fact he's probably got a YouTube video out there already, but I just can't be bothered to look.

I wonder what William Dembski thinks of some of the company he is keeping?


Monday, November 30, 2009

From Spears to Aircraft


A Sophisticated Flying Machine: A series of incremental technological changes has brought this configuration into existence.

This post by GilDodgen on Uncommon Descent gives an indication of the central place that Irreducible Complexity (IC) has in the ID theoretical canon; everything in ID theory seems to be either an aspect of this principle or a detail. This is what GilDodgen says:

The more we learn the more it appears that almost everything of any significance in living systems is irreducibly complex. Multiple systems must almost always be simultaneously modified to proceed to the next island of function. Every software engineer knows this, and living things are fundamentally based on software.

Evolution in the fossil record is consistently characterized by major discontinuities — as my thesis about IC being a virtually universal rule at all levels, from the cell to human cognition and language, would suggest — and the discontinuity between humans and all other living things is the most profound of all. Morphological similarities are utterly swamped by the profound differences exhibited by human language, math, art, engineering, ethics, and much more.


GilDodgen then goes on to suggest that the vast differences between human beings and other primates are evidence of the discontinuity imposed by IC.

In biological terms the crucial concept is structural stability; that is, the ability to self-perpetuate. But this is only possible if the components of a biological structure serve to promote self-perpetuation, and they can only do this if they work; that is, if they function correctly.

Crucially for ID, GilDodgen assumes that functionality only comes in absolutely isolated islands; that is, any change to a structure that goes beyond a certain small threshold entails a loss of functionality and thus a non-viable, unstable structure. If IC is true, then, there are no incremental entry points into or out of a stable structure, and IC structures (by definition) have no stable precursor structures incrementally separated from them. Ergo, evolution cannot happen.

The inverse of the notion of IC is Reducible Complexity (RC); RC requires that the domains of functionality/stability are so connected that there exist ways of incrementally modifying a structure that still yield a stable structure as the result. The RC conjecture is that there is a class of functional/stable structures that form a fully connected set and that a broad range of structural complexity is found in this set; if such a class exists then a barrier to evolution is lifted.

Of great significance to ID theory is this quote from GilDodgen: “Multiple systems must almost always be simultaneously modified to proceed to the next island of function”. That simultaneous modification is the only way to reach the next island of stability is, in the mind of the ID theorist, sufficient evidence of the complete isolation of islands of stability in a huge sea of non-viability; for to jump the gap between islands of functionality using random changes would require the simultaneous adjustment of several features in just the right way. The probability of these changes coming together is utterly negligible and thus a barrier of improbability isolates the islands of stability. The only thing that we know capable of jumping the space between the islands is intelligence.

Currently I have several issues with the concept of IC. The following points are really areas of research rather than killer arguments against IC.

1) If the only way to reach a near neighbor stable structure is by a very rare simultaneous change of features, then the IC case holds. However, if a route exists to a stable neighboring structure via a “path” consisting of a series of incremental changes to single features, each of which results in a stable structure, then we have a Reducibly Complex connected set of structures. However, these "single feature change" pathways are likely to be a relatively rare occurrence in the space of all possible changes, thus giving the impression that they don’t exist.

2) Human technological advance has only been possible because the limited quantum of human intelligence (call it i) can leap the gap between islands of functionality. But, and here is the important point, i is not large enough to leap the gap between stone age spears and GilDodgen’s aircraft in one bound. However, it is clear that the islands of functionality in technological configuration space are close enough that, given the quantum of human intelligence i, the human mind can leap the gaps in functionality leading up to a sophisticated flying machine. It is conceivable that there are structures out there that can never be reached by the human mind because they are effectively irreducibly complex with respect to the quantum of human intelligence i. Nevertheless, it is clear that given human technology, large parts of technological configuration space are connected enough (i.e. they are reducibly complex with respect to i) to make technological evolution possible. It seems that IC and RC are not binary opposites but come in degrees (see the sketch after this list). Fortunately for us technological configuration space is reducibly complex with respect to i. But, and this is the 64-billion-dollar question, is biological configuration space sufficiently connected with respect to the blind watchmaker?


3) Human Intelligence itself appears to employ a kind of evolution; it makes a series of incremental adjustments to its concepts, concepts that are either selected or rejected; search, reject and select is the general evolutionary algorithm.
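Here is the sketch promised in point 2 (a toy model of my own, not anything from the ID literature): whether a goal structure counts as irreducibly or reducibly complex depends on the maximum number of features, i, a searcher can change at once.

```python
from itertools import combinations

def reachable(stable, start, goal, i):
    """Breadth-first search through stable bit-strings, where a searcher with
    'intelligence quantum' i may flip up to i bits simultaneously."""
    def hops(s):
        for k in range(1, i + 1):
            for idx in combinations(range(len(s)), k):
                t = list(s)
                for j in idx:
                    t[j] = '1' if t[j] == '0' else '0'
                yield ''.join(t)
    seen, frontier = {start}, {start}
    while frontier:
        frontier = {t for s in frontier for t in hops(s)
                    if t in stable and t not in seen}
        seen |= frontier
    return goal in seen

# A toy 'morphospace' of stable structures: with i=1 the far structure is an
# isolated island, but a searcher allowed 2-bit jumps crosses the gap.
stable = {'0000', '0001', '0111', '1111'}
print(reachable(stable, '0000', '1111', i=1))   # False: IC with respect to i=1
print(reachable(stable, '0000', '1111', i=2))   # True:  RC with respect to i=2
```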

Sunday, November 29, 2009

Serpentine Logic

That devolved legless reptile is doing its damnedest to stay in the picture

Interesting is this post on Uncommon Descent concerning ID guru William Dembski’s book “The End of Christianity, Finding a Good God in an Evil World”. This is what Dembski says about the book.

My book attempts to resolve how the Fall of Adam could be responsible for all evil in the world, both moral and natural IF the earth is old and thus IF a fossil record that bespeaks violence among organisms predates the temporal occurrence of the Fall. My resolution is to argue that just as the salvation of Christ purchased at the Cross acts forward as well as backward in time (the Old Testament saints were saved in virtue of the Cross), so too the effects of the Fall can go backward in time. Showing how this could happen requires extensive argument and is the main subject of the book. As for my title, “End of Christianity” involves a play on words – “end” can refer to cessation or demise; but it can also refer to goal or purpose. I mean the latter, as the subtitle makes clear: Finding a Good God in an Evil World.

The link to the interview with Dembski is also worth looking at.

Some comments

1. Although he minces his words and hedges (not surprisingly, given his particular Christian sub-culture), my guess is that Dembski is an “Old Earth” believer; otherwise his attempt to explain pre-fall evil as a “retroactive” result of an anthropocentric fall would seem rather futile. (Note: If the act of God in Christ is a powerful symbolic declaration of covenant by God and revealed in the fullness of time, then its ability to dispense grace retroactively is less enigmatic, whereas I can make little sense of a retroactive fall in this light.)

2. In line with Young Earth Evangelicalism’s world view, Dembski assumes evil and suffering to be largely sourced anthropocentrically. And yet the serpent in Genesis 3 could be construed as a symbol of the presence of extra-human evil. The person(s) who penned Genesis 3, as is the wont of arcadian folk, would be ever aware of the lurking presence of evil in an unseen world. There is also the question over the interpretation of the enigmatic Romans 8:18-23, a passage which hints that the cause of evil and suffering is not quite so crisp and clear-cut as traditional Young Earth Evangelicalism would have it.

3. The problem Dembski is addressing comes out of contemporary Young Earth Evangelicalism’s world view and is thus very pressing in his own intellectual circles. Some of us, such as myself, who feel that there are deeper mysteries surrounding the source of suffering and evil have a less pressing need to explain pre-fall suffering. According to my concordance the word “good” of Genesis 1 is not quite the same as the word perfect (~ completion). I wonder what Dembski teaches his seminary students?

I have been a close witness of evangelical philosophy for over thirty years, and its sometimes cursory and shallow treatment of suffering and evil is not its only shortfall. So many standard evangelical explanations fail to make sense of the world around us, not to mention their inconsonance with aspects of my own personal experience. It’s no wonder that when pressed the fallback position of many evangelicals is fideism, viz: “Faith is not always logical; if it was it would not be faith.” In the fideist mind the ability of faith to accommodate mystery is conflated with the ability to swallow illogicality. But I’ll hand it to William Dembski, he is a brave guy (and a nice guy as far as I can tell) and he is making a very valiant attempt at being logical. Pity about some of his fellow pilgrims.

Saturday, November 28, 2009

World Beater

I must admit that I share some of Denyse O’Leary’s doubts about the democratic intentions of the “new atheists”. Stalinist atheism is to liberal atheism as Inquisitional Christianity is to liberal Christianity. Is “new atheism” a Stalinist form of atheism? Well, as far as I’m concerned that question remains to be answered. Given the insults, accusations of “scientific heresy” and unreason screamed out by some “new atheists” (even at fellow atheists), I have to say that I’m not at all confident that democracy would be in a safe pair of hands should they gain exclusive political control. They are reminiscent of the revolutionary Marxists: “If you are not for us you are against us; we will see to your ‘re-education’ after the revolution.” The best complexion I can place on it is that in the wider social economy Stalinist atheism and Inquisitional Christianity are a reaction of one to the other and that somehow they cancel out. Hopefully.

Sunday, November 22, 2009

Left Out in the Cold

This post by William Dembski may give some indication of the social circles he is connected with. I’m not going to comment on the issue of what Dembski calls anthropogenic global warming (except to say notice “anthropogenic” here – does that mean that global warming is less the issue than is the cause of global warming?), but the post does provide insight into the human and political dimension of the ID/evolution debate.

Ever the malaise of those who feel unjustly marginalised, I get a whiff of conspiracy theory when Dembski quotes from an article on global warming found in a newsweb of the religious right.

“It is not hard to think up ways to scare people into handing over more of their cash via taxes, insurance, inflation, etc. You just have to think of the right nightmare, publicize it, politicize it, turn it into curriculum, and sooner or later people will gladly hand you loads of moola with tears in their eyes.”


This right wing newsweb promotes “What the bible says about creation” over and against what “science says”. Given this polarised notion of "bible vs. science" I think we can guess what their views are. (see here)

And yet I sympathize with Dembski when he says:

"How, in other words, to create a scientific climate in which anyone who disagrees with AGW [anthropogentic global warming] can be written off as a crank, whose views do not have a scrap of authority.” Where have I heard that sort of thing before?"

It’s no surprise that Dembski identifies with the anti-anthropogenic global warming lobby given how he has been treated over his evolution/ID work.



William Dembski and friends have been left out in the cold - no wonder they don't believe in Global Warming

Friday, November 20, 2009

So Long Pilgrim

This post on UD publicizes a new book called “Should Christians Embrace Evolution”. The post says:

Believers in a God-guided Darwinism are preaching that Darwinism is a fact and that the Bible can be reconciled with it. This new book comprehensively refutes both ideas. Far from necessary, theistic evolution is both bad theology and bad science.

Firstly: It is impossible for evolution to be “a fact” in the sense that gravitational theory is “a fact”. Gravitational theory deals with a relatively simple mathematical object, whereas evolution, if it has occurred, may be computationally irreducible. So to be fair, all due allowance must be made for evolution’s level of epistemological intractability, just as one must make epistemological allowance for something as complex as deity. I would classify evolution less as “bad science” than as “difficult science”. And let's not forget that we must also make all due allowance for the scientific difficulties ID theory itself faces. For there is a very wide range of theories held by ID theorists, from those who accept an evolution of sorts via common descent to those who believe God "poofed" the whole cosmic show, as seen, into existence 6000 years ago. So it is clear that ID theory faces epistemological difficulties at least as great as those of evolution. People who live in glass houses should not throw stones.

Secondly: What about bad theology? Well, the prodigal son went through a process of incremental learning and development and yet there came the time when he was self aware enough to willfully reject the right way. Moreover, the environment in which he was reared was good but not perfect.

Looking through the list of book contributors I get a distinct whiff of raw evangelicalism here and there. This does suggest that I am unlikely to be comfortable with the religious culture of many contemporary ID supporters. And yet ironically I accept the general notion of design; namely that the elemental must have its source in a-priori complexity; I personally identify that a-priori complexity with a personal God. But it seems to me that for most contemporary ID supporters the bare and general notion of design is less important than a very vocal and specific anti-evolutionism. In fact it even seems that theism is less important to them than anti-evolutionism. In this connection I wasn’t aware that Steve Fuller is a theist, any more than I was aware that David Berlinski is a theist, but these people connect very well with contemporary ID theory’s prevalent culture of de facto anti-evolutionism. As I endeavor to be a non-aligned party in the ID/evolution debate I find I cannot support the raw emotionalism and anti-evolutionism that is so often bound up with the vested interests and crowd dynamics of contemporary ID culture. In many respects ID culture has burnt its boats. It has passed the point of no return.

As I explore links between evolution, quantum theory, algorithmics, intelligence and learning it looks as though I’m on a very different pilgrimage to many contemporary ID theorists.

Friday, November 13, 2009

On Uncommon Descent

I’m currently having a very fruitful discussion with an Uncommon Descent poster. See here.

Thursday, November 12, 2009

Nothing is Simple. Something is ... well, Something Else!

I recently heard an argument to the effect that there is no problem in the universe coming from nothing because current cosmological evidence points to a flat universe; in such a universe positive energy is completely cancelled by negative gravitational energy and therefore the universe could have come from nothing. (I suppose a similar point could be made about cancelling electric charges)

Now, I think this mathematical jiggery-pokery was all rather tongue in cheek and not meant to be taken seriously. So with my tongue also firmly in my cheek let me engage in a little mathematical sleight of hand.

Let me define a quantity that I call the quadratic field operator. This operator takes a differentiated field of energy and sums up the squares of the energy components. If we start with absolutely nothing at all then clearly the quadratic field operator returns zero (= nothing). No problem. But as soon as there is any differentiation of energy into a field of cancelling negative and positive energy components, the quadratic field operator quietly slips past the outer parentheses into the field of energy and hey presto, you end up with a finite quantity (= something). Thus, if absolutely nothing suddenly resolves itself into a field of cancelling energy terms, the quadratic field operator reveals that something has come from nothing. Tough luck! We are back to square one! (Or should that be square zero?)
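For completeness, the sleight of hand mechanized (tongue still firmly in cheek; the numbers are arbitrary):

```python
# A field of cancelling positive and negative energies: the plain sum is
# nothing, but the 'quadratic field operator' returns something.
field = [+3.0, -3.0, +1.5, -1.5, +0.5, -0.5]

linear_total = sum(field)                     # 0.0  -- "the universe is nothing"
quadratic_total = sum(e * e for e in field)   # 23.0 -- unmistakably something

print(linear_total, quadratic_total)
```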

Of course, all this bypasses some rather abstruse philosophical questions: Do the “laws of physics” have any real meaning unless they are reified on an existing material ontology? What comes first, the laws of physics or the stuff whose patterns of behavior those laws describe? Does the description of an ontology precede the ontology it describes? Do ethereal immaterial laws actually bring something into existence? Can existence be treated as a mere property that may be attributed or unattributed to an entity, or is it a fundamentally different category altogether? I seem to remember that Anselm’s ontological argument foundered on this latter question!

Monday, November 02, 2009

Darwin Bicentenary Part 28: The Mystery of Life’s Origin, Chapter 8



Continuing my consideration of the three online chapters of Thaxton, Bradley and Olsen’s book “The Mystery of Life’s Origin”.

As far as chapter 8 is concerned I can cut a long story short; Thaxton, Bradley and Olsen calculate the Gibbs free energy associated with the formation of the information-bearing bio-polymers essential for life. They come to the conclusion that this free energy is positive - as one would expect from a process involving a polymerization that uses energy to form the required molecular bonds and also to work against entropy in order to secure the right molecular configuration. I can see no fundamental flaw or controversy in TB&O’s conservative estimates. At the end of the chapter they draw the conclusion that life’s information-bearing bio-polymers are not going to form in near-equilibrium conditions. However, they make this final comment:

In the next chapter we will consider various theoretical models attempting to show how energy flow through the system can be useful in doing the work quantified in this chapter for the polymerization of DNA and protein. Finally, we will examine experimental efforts to accomplish biomacromolecule synthesis.

Thus, TB&O leave us with the same cliffhanger that I felt I was left with in the previous chapter: if life has any chance of forming naturally, that chance will only be found, if it is to be found at all, in disequilibrium conditions where there is energy flowing through a system’s subsystems. Thus it may be possible that this energy flow can be used to produce organic structures with high Gibbs energies … or perhaps not.
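For readers who want the bookkeeping behind TB&O’s estimates: the quantity in play is the Gibbs free energy ΔG = ΔH − TΔS, with the entropy term split into thermal and configurational parts. The sketch below uses placeholder numbers of my own, not TB&O’s figures; it merely shows the shape of the calculation and why a positive ΔG results when polymerization must pay both a bonding cost and a configurational cost.

```python
# Sketch of the free-energy bookkeeping, with PLACEHOLDER numbers (not TB&O's):
# dG = dH - T * (dS_thermal + dS_config). Both entropy terms are negative for
# polymerization, so they add to the free-energy bill rather than paying it.

T = 298.0            # kelvin, room temperature
dH = +5.0            # kJ/mol, illustrative enthalpy cost of bond formation
dS_thermal = -0.02   # kJ/(mol K), illustrative thermal entropy change
dS_config = -0.01    # kJ/(mol K), illustrative cost of one specific sequence

dG = dH - T * (dS_thermal + dS_config)
print(f"dG = {dG:.1f} kJ/mol")   # positive: not spontaneous near equilibrium
```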

***

Remark
Chapter 8 has a more promising view of the nature of living configurations than the previous chapter. In the previous chapter (as I said) TB&O seem to conflate complexity and order, but in this chapter they tell us (rightly, I believe) that living structures are neither highly ordered nor highly disordered and thus classify as something else. They note that the aperiodicity of bio-polymers resembles the aperiodicity of random arrangements of bases, both being very different from the high order of crystals, but TB&O then go on to tell us just what distinguishes bio-polymers from random polymers:

Only certain sequences of letters correspond to sentences, and only certain sequences of sentences correspond to paragraphs, etc. In the same way only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions. Thus, informational macro-molecules may be described as being aperiodic and in a specified sequence. Orgel notes: “Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity”

However, in my view TB&O are still presenting a rather obscure answer to the question of just where living configurations stand in relation to order and disorder, referring only to a mysterious property they call “specified complexity”. To try and clarify this they say:

Yockey and Wickens develop the same distinction, that "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.

In referring to a specific set of spatio-temporal and functional relationships among their parts, TB&O may be trying to tell us that informational macromolecules are only one component taken from a wider system, a system that as a whole has a configuration that is both highly ordered and yet highly complex. Taking one component, such as an informational macromolecule, out of that system yields something that in a standalone sense is highly disordered, and yet when placed within the context of the whole working system it is seen to be part of a highly organized whole. Thus “specified complexity” is not a vitalistic property intrinsic to a sequence, but a property bestowed extrinsically by virtue of the sequence having a well-defined and orderly role in a much larger spatio-temporal configuration that certainly does not classify as disordered. “Specificity” is itself a configurational property that comes out of the configurational relations between an informational macromolecule and the spatio-temporal configuration in which it is embedded. Hence, we are back to where we were in my post on chapter 7: biological structures are configurations in time and space, but they are neither highly ordered nor highly disordered – they inhabit the space between the extremes of order and disorder. Perhaps the term “complex organization” best describes these configurations. But although crystals are far too elementary a form of organisation to make a close comparison with life, there is, in spite of what TB&O say, a close relation between the process of crystallization and the conjectured process of evolution. But more about that another time.


Sunday, November 01, 2009

The Mystery Deepens

I was fascinated to read this post on UD where William Dembski appears to disassociate himself from Young Earth Creationism (At least that’s how I read it). He suggests that attempts to conflate ID and YEC are motivated by a desire to discredit ID by associating it with YEC. Surely Dembski isn’t hinting that YEC is a bad thing to be associated with?

Once again Dembski stresses that “ID, per definitionem, is the study of patterns in nature that are best explained as the product of intelligence.”

He also tells us:

It [ID] rests on two pillars: (1) that the activity of intelligent agents is sometimes detectible and (2) that nature may exhibit evidence of intelligent activity. How anyone gets young-earth creationism from this is a mystery.

I may agree with that, but since evolution would also classify as a pattern of nature (assuming nature exhibits such a pattern) then using the very criteria employed by ID theorists for identifying the work of intelligence doesn’t it also follow that the very peculiar and customized patterns required by evolution are arguably also the work of intelligence? So why doesn’t Dembski give more credence to evolution as at least a candidate pattern displaying evidence of Intelligent Design? What is different about evolution in the eyes of the ID/YEC axis? Is it because both hanker after an “in yer face” supernaturalism? How anyone gets antitheism from evolution is a mystery.



Characters of the Wild Web No. 18: Billy the DembskID - “The man the authorities came to blame for something that he never done (sic)....but one time he could-a been the champion of the world"

Thursday, October 22, 2009

The Glasshouse Effect


Just a couple of comments on two blog posts I want to make a note of:

Uncommon Descent.
This post on Uncommon Descent made me laugh. I can understand UD’s feeling that it’s one law for them and another for established scientists! The papers the post refers to can be found here and here. What do I file these papers under? Hoax Physics, Crank Physics or just “Edgey”? Whatever; it appears that they have already been classified by their peer group as professional physics.

Sandwalk
This looks to be quite a promising departure on Sandwalk. It is a link to Jim Lippard’s blog. Lippard is an atheist, but he seems to be asking those reflexive epistemological questions that we should all be asking: What is science? How do we know things? Why do we believe? Etc. Given that questions over the primary ontology of the cosmos are highly speculative, bound up with philosophy and epistemology, it is only natural that people attempting to arrive at preemptive proposals/views in this area will tend to come up with different speculations ranging from theism, simulated universes, multiverses and self-organizing principles to postmodern nihilism. Where things go wrong, I feel, is when particular proposals become the jealously guarded property of some subcommunity which then, with religious zeal, embarks on the hard oversell of their views.* In particular, I cannot go along with Christians who take the position that atheists must all have bad consciences because they are willfully holding out against some Gnostic or fideist revelation. Atheism, in my view, is just one of a set of plausible constructions given the human predicament. Conversely, I must set against this those atheists who frequently accuse others of irrationality and yet themselves appear to be unself-critical, and lacking in both reflexivity and self-scepticism.

But the good news is that Lippard appears to be asking the right kind of questions, questions that both hardened atheists and religionists need to be asking themselves. A step in the right direction I feel.


Addendum: I notice that Larry Moran of Sandwalk says he hasn't the time to debate Lippard. It may be just me but haven't I noticed before that he's stayed well clear when things start to get edgy? The "fringe" isn't for him and I suppose that we need people like him to keep things anchored. But there is more to scepticism than bread and butter science - and "bah humbug".

* Footnote
To be fair this is probably an effect of communities polarizing against one another; if one community makes signals indicating what they think of as their spiritual/intellectual superiority in comparison with another community, the latter community, tit for tat, will respond in kind.

Monday, October 19, 2009

The Neurological Problem

The following is a comment that I intended to put on James Knight’s blog here, but for some reason I was denied the paste option, so I’ll have to post it here instead:

I think this is what they call the “hard problem” – that is, just how does the subjective first person perspective of “qualia” marry with the (putatively) objective third person perspective? The latter perspective only recognizes first person mental processes as neuronal activity. In fact when the third person gets up close to the first person he only ever sees neuronal activity.

This issue, especially amongst materialists, is often cast in a mold which assumes the third person perspective is somehow more fundamental and “real” than the seemingly private world of first person qualia. This position is apt to overlook the fact that third person perspectives also necessarily involve a sentient agent, an agent that does the observing and reports in third person language; it’s a position that vaguely resembles the N-1 test for egocentricity or those fly-on-the-wall documentaries where the necessary presence of the camera man is all too easily forgotten. In the final analysis the third person perspective entails an unacknowledged first person perspective. This failure to count the third person encourages the construction of a materialist ontology that uncritically bypasses the implicit assumption of sentience entailed by the observations needed to construct a third person narrative.

As both first and third person perspectives necessarily involve persons, the “hard problem” in actual fact seems to amount to this: Just how do the qualia of the first person register themselves in the qualia of the third person? That is, what does the third person observe (i.e. experience) when he attempts to investigate the first person's qualia? The answer to that is that the third person, when he takes a close look, only ever observes these qualia as neurological activity. How then do the first person qualia map to this activity?

Regardless of the question of just whether or not the third person perspective reveals a fundamental and primary materialist ontology behind qualia, it seems to me that the human perspective is forever trapped in the dichotomy of two stories; the “I-story” of first person qualia and the “It-story” of third person observations of other people; this latter story is told in terms of human biology.

I like the proposal that there is point-by-point conformity between these two stories; that is, for every first person experience of “qualia” there is a corresponding event in the third person world of “materials”. (However, I wouldn’t want to oversell this idea, or be too committed to it.)

With reference to the latter proposal it is worth recalling that, neurologically speaking, the mind probably has a chaotic dynamic, and thus is sensitive to the butterfly effect. Therefore even though we may eventually come to tell and understand the principles of the "I-story" in terms of neurological activity (as we might fancy we understand the principles of weather), it looks as though under any circumstances minds, like the weather, will forever generate effects of which we will have no clue whence they came.
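The butterfly effect in question is easy to exhibit in miniature (a generic chaotic map of my own choosing, nothing specifically neurological about it): two trajectories of a fully known deterministic rule, started a hair apart, soon become uncorrelated.

```python
# Two trajectories of the chaotic logistic map starting 1e-12 apart: by about
# forty steps they have completely diverged, though the rule is fully known.
x, y = 0.4, 0.4 + 1e-12
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(step, round(x, 6), round(y, 6), round(abs(x - y), 6))
```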

Friday, October 16, 2009

A Proposed Problem Solution

This post concerns a little computational conundrum that I have been pondering for some time. I now think I may have a handle on the solution.

Firstly some preamble.

Let’s assume that we can represent a general computation using a model similar to one I suggested in this post as follows:

Imagine a very long binary string – long enough to express any kind of computer output or result required. Now imagine all the possible configurations that this string could take, and arrange them into a massive network of nodes where each node is connected to neighboring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’) then a given binary configuration will be connected to Lr other nodes, where each of these other nodes is different by just one bit.
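To make the network concrete, here is a small Python sketch of the adjacency just described; the function name is mine, purely for illustration:

def neighbours(node):
    """Return the Lr one-bit-flip neighbours of a binary string."""
    flip = {'0': '1', '1': '0'}
    return [node[:i] + flip[node[i]] + node[i + 1:] for i in range(len(node))]

print(neighbours("0110"))
# ['1110', '0010', '0100', '0111'] – four neighbours for Lr = 4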

Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, which has a length represented by Lp.
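Continuing the sketch, a path string can be read as a sequence of bit positions to flip; this particular encoding of a path is an assumption of mine, adopted purely to illustrate the idea:

def walk(start, path):
    """Apply a sequence of single-bit flips to a start node."""
    node = list(start)
    for i in path:
        node[i] = '1' if node[i] == '0' else '0'
    return ''.join(node)

print(walk("0000", [0, 2, 0]))   # 0000 -> 1000 -> 1010 -> 0010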

The essential idea here was that algorithmic change involves a path through a network of states separated by some kind of increment; in this case a bit flip. Since I wrote the above I have stayed with the general idea of a network of binary patterns linked by some form of increment, with algorithmic change effectively being a route through this network. However, in the above model the patterns are networked together by bit flipping – this seems a fairly natural way of linking the binary patterns, but as I have thought about it I have come to understand that the method of linking the patterns depends on the computing model used; in fact the computing model also seems to affect the set of patterns actually allowed, and even just how the “cells” that host the binary bits of the pattern are sequenced. Once these network variables are specified, it is only then that we can start considering just what the computing model thus defined can be programmed to do. In this network model of computation I envisage the computation engine’s running state, its program and its workings in memory all to be uniquely represented by network patterns; thus part of a particular pattern will represent the computation engine’s state and its program.

So how does this pan out for, say, a Turing machine? The configurational state of the Turing machine’s tape in terms of allowed characters seems to have no limitation – all configurations of characters are allowed. But it seems that not every network transition is allowed. If we take into account the fact that we must represent the position of the machine on the tape as part of a network pattern, then it is clear that arbitrary pattern transitions can’t be made; a character cell on the output tape cannot be changed if the machine is not at that position. There will also be some limitations on the actual patterns allowed, for, depending on how we are to code the machine’s programs, we will expect syntax limitations on these programs. So a Turing computing model puts some constraints on both the possible network configurations and the network transitions that can be carried out.
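As a minimal illustration of that constraint, here is a sketch of a single Turing move in Python; the write-and-shift format is an assumption made purely for illustration:

def step(tape, head, symbol, move):
    """One Turing move: write symbol under the head, then shift the head."""
    tape = tape[:head] + symbol + tape[head + 1:]   # only the cell under the head may change
    return tape, head + (1 if move == 'R' else -1)

print(step("0000", 2, '1', 'R'))   # ('0010', 3) – no other cell can change this step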

Another point to make here is that what counts as an incremental change in one computer model may involve a whole sequence of changes in another. Compare, for example, a Turing model with a “counter program” computing model. The Turing model is fairly close to my bit-flipping model; changes in the tape involve simple character swaps. However, in a counter program the variables are numbers and the increments are plus or minus 1. This means that the character patterns of two numbers separated by the increment of 1 can look very different; e.g. 9999 + 1 = 10000. Thus, in counter programs two patterns separated by an increment of 1 or -1 may have more than a one character difference.
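A quick Python sketch makes the point; the helper function is mine, for illustration only:

def char_diff(a, b):
    """Count differing character positions, padding the shorter string with zeros."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, '0'), b.rjust(width, '0')
    return sum(x != y for x, y in zip(a, b))

for n in (41, 99, 9999):
    print(n, '->', n + 1, ':', char_diff(str(n), str(n + 1)), 'characters differ')
# 41 -> 42 : 1 characters differ
# 99 -> 100 : 3 characters differ
# 9999 -> 10000 : 5 characters differ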

So what is my problem and how does the foregoing preamble help?

The problem I have pondered for some time (for a few years actually!) arises from the fact that there exists a class of small/simple programs that can in relatively quick time generate disordered patterns. (Wolfram’s cellular automata pictures are an example.) It follows therefore that there is a “fast time” map from members of this class of small programs to members of the class of disordered patterns.* But the class of disordered patterns is far, far larger than the class of short programs. Hence we conclude that only a tiny subset of disordered patterns maps to “fast time” small programs. Now here’s the problem: why should some disordered patterns be so favoured? The link that a relatively small class of disordered patterns has to simple mathematics grates against my feeling that across the class of disordered patterns there will be symmetry, and that no disordered pattern should be specially favoured.
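Wolfram’s Rule 30 is the classic example of such a small program. The following minimal Python rendering (grid size, run length and boundary treatment are arbitrary choices of mine) generates a visibly disordered pattern in a handful of steps:

# Rule 30: a cell's new value is determined by its left, own and right values.
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

cells = [0] * 31
cells[15] = 1                        # a single seed cell
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = [RULE30[(cells[i - 1], cells[i], cells[(i + 1) % len(cells)])]
             for i in range(len(cells))]   # periodic boundary for simplicity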

My current proposed solution to this intuitive problem is as follows. The computing model is a variable, and hence it is very likely that different computing models will favour different sets of disordered patterns with a map to fast time generation algorithms. In fact my guess is that the computing model variable has such a large degree of freedom that for any given configuration it is possible to find a model that will generate that configuration in fast time. The computing model variable thus restores symmetry over the class of disordered configurations. But in order to cover the whole class of disordered configurations with fast time maps, where does the enormous variability of the computing model come from? It seems to come from several sources. One, as we have seen above, is the way the computing model wires the network of increments. Another degree of freedom is found in the sequencing of the patterns in the network: a Turing machine, via its quasi-continuous movements up and down a tape of cells, effectively imposes a particular order on the cells of the output. It is possible to imagine another Turing machine that has an entirely different cell ordering and thus would access an entirely different set of disordered configurations in fast time. I suggest, then, that the possibilities of the computing model look to be enough to cover the whole class of disordered configurations with fast time maps, thus restoring the symmetry I expect.
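The cell-ordering degree of freedom can be shown with a trivial Python sketch: permuting the output cells of one machine yields, in effect, a different machine whose fast time outputs are the permuted images of the first machine’s outputs. The pattern and permutation below are arbitrary illustrative choices:

import random

random.seed(1)
pattern = "0011010110100101"              # imagine this is some machine's fast time output
perm = list(range(len(pattern)))
random.shuffle(perm)                      # a different sequencing of the same cells
print(''.join(pattern[i] for i in perm))  # a different disordered pattern, equally 'fast'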

* Footnote
The “Fast Time” stipulation is important. Some single simple programs can, if given enough time, eventually generate any selected pattern and thus on this unrestricted basis a simple program maps to all output configurations.

Thursday, October 15, 2009

Does Alpha's Poll Exist?


Characters of the Wild Web: Myers’ raiders ransacking the Alpha Course web site.

I notice that PZ Myers' raiders are out and about again causing gratuitous grief. This time they have completely wrecked an Alpha course poll! (See http://scienceblogs.com/pharyngula/2009/10/alpha_pollalready_demolished.php.) Here's the Alpha poll page last time I looked:

Does Alpha's poll exist? No, thanks to Myers' invasion.

I think it's going to be pretty "Nicky Grumble" at Alpha HQ when the good Rev sees this. PZ had better watch out; Rev Gumbel has quite a few troopers behind him as well – just how many supporters have been through Alpha courses? (I'm not one of them, I must add.)

Tuesday, October 13, 2009

The End of Puzzlement?

The juggling act in which William Dembski has to engage in order to maintain popular appeal amongst a broad church ID/YEC movement may be evidenced in his UD post advertising his latest book “The End of Christianity”. I’ve always been rather puzzled by his position, and the following quote taken from the UD post only compounds my puzzlement: “Even though argument in this book is compatible with both intelligent design and theistic evolution, it helps bring clarity to the controversy over design and evolution.”

Dembski gives every impression of being a very nice guy, but over in America this evolution/ID debate sometimes resembles a kind of football culture with star players getting money and accolades, and supporters fanatically sold out to their respective sides. I’m unsure about whether or not Dembski’s position is compromised by having an adoring “fan club” and, who knows, wealthy benefactors as well. Is fan club driven science a good atmosphere for a dispassionate perspective on the cosmos, a perspective that is far healthier when one is prepared to face one’s demons rather than one's fans? Will I buy the book? I’ll have to think about it.

Mr. Deism Speaks Out.

In this little piece (which includes a YouTube video) by PZ Myers, two points are worth noting:

a) The reference to evolution as a cruel and inefficient process

b) That evolution is a process which once started means that deity doesn’t need to expend further effort or even lift a finger to assist.

The irony is that many an ID/YEC pundit would agree on both points! That’s why atheists love “natural” evolution and ID/YECs hate it.

Wednesday, October 07, 2009

Darwin Bicentenary Part 27: The Mystery of Life’s Origin (Chapter 7)



I have been busy looking at the three online chapters of “The Mystery of Life’s Origin”, a book written by anti-evolutionists Thaxton, Bradley and Olsen. The first of these chapters (chapter 7) introduces itself thus:

It is widely held that in the physical sciences the laws of thermodynamics have had a unifying effect similar to that of the theory of evolution in the biological sciences. What is intriguing is that the predictions of one seem to contradict the predictions of the other. The second law of thermodynamics suggests a progression from order to disorder, from complexity to simplicity, in the physical universe. Yet biological evolution involves a hierarchical progression to increasingly complex forms of living systems, seemingly in contradiction to the second law of thermodynamics. Whether this discrepancy between the two theories is only apparent or real is the question to be considered in the next three chapters.

In another passage we read:

It is often noted that the second law indicates that nature tends to go from order to disorder, from complexity to simplicity. If the most random arrangement of energy is a uniform distribution, then the present arrangement of the energy in the universe is nonrandom, since some matter is very rich in chemical energy, some in thermal energy, etc., and other matter is very poor in these kinds of energy. In a similar way, the arrangements of mass in the universe tend to go from order to disorder due to the random motion on an atomic scale produced by thermal energy. The diffusional processes in the solid, liquid, or gaseous states are examples of increasing entropy due to random atomic movements. Thus, increasing entropy in a system corresponds to increasingly random arrangements of mass and/or energy.

Thus far Thaxton, Bradley and Olsen hint at a possible conflict between evolution and the second law of thermodynamics. At this stage, however, TB&O don’t claim an outright contradiction but rather an intuitive contradiction that needs to be investigated. Their caution is justified because there is a basic incoherence in TB&O’s statement of the apparent problem. They suggest a parallel between order and complexity, but the fact is that the most highly ordered systems, like say the periodicity one finds in a crystal, are the very antithesis of complexity. They also suggest a parallel between disorder and simplicity whereas, in fact, highly disordered systems, such as random sequences, are in one sense extremely complex rather than simple; their complexity is such that in the overwhelming number of cases they don’t submit to a description via some succinct and relatively simple mathematical scheme. There is, therefore, an inconsonance in TB&O linking order to complexity and disorder to simplicity.

What may be confusing TB&O is the fact that highly disordered systems are statistically simple but not configurationally simple: statistically speaking a very disordered system may be characterized with a few macroscopic parameters whose values derive from means and frequencies, whereas to pin a disordered system down to an exact configuration requires, in most cases, a lot of complex data. Where TB&O seem to fall down in chapter 7 is that they fail to make it clear that living structures are neither very simple nor highly complex, neither highly ordered nor highly disordered, but are, in fact, configurations that lie somewhere in between the extremes of order and disorder, simplicity and complexity. As such the characterization of living things is amenable neither to simple mathematical description nor to statistical description.
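The statistical/configurational distinction can be made vivid with a little Python sketch, using zlib compression as a crude stand-in for descriptive complexity (a rough-and-ready assumption on my part, not a rigorous measure):

import random, zlib

random.seed(0)
ordered = b'01' * 500                                          # a crystal-like periodic string
disordered = bytes(random.choice(b'01') for _ in range(1000))  # a random string

for name, s in (("ordered", ordered), ("disordered", disordered)):
    print(name, "- exact description:", len(zlib.compress(s)), "bytes;",
          "statistical summary: count of 1s =", s.count(ord('1')))

The periodic string compresses to a handful of bytes while the random string stubbornly resists compression; yet both are summarized statistically by a couple of numbers.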

That TB&O have misrepresented the order/disorder spectrum as a polarity between complexity and simplicity smooths the way for their suggestion of an intuitive contradiction between evolution and the second law. Because they have identified high order with high complexity, and because the second law implies a move away from order, it follows in the minds of TB&O that the second law must also entail a move away from structural complexity, thus ruling out the development of complex living structures as a product of thermodynamics. If TB&O were aware of the intermediate status of living structures in the configurational spectrum they might realize that the situation is more subtle than their mismanagement of the categories suggests.

Another nuance that doesn’t come out in chapter 7 of TB&O’s book is that disorder or entropy, as it is defined in physics, is not a good measure of the blend of complexity and organization we find in organic structures. In physics disorder is defined as the number of possible microscopic configurations consistent with a macroscopic condition. For example, compare two macroscopic objects such as a crystal and an organism. A crystal on the atomic level is a highly organized structure, and as such it follows that there are relatively few ways the particles of a crystal can be rearranged and still give us a crystal. On the other hand a living structure has a far greater scope for structural variety than that of a crystal, and it therefore follows that the number of ways of rearranging an organism’s particles without disrupting the abstract structure of the organism is much greater than that of a crystal. Therefore, using the concept of disorder as it is defined in physics, we conclude that an organism has a far greater disorder than a crystal. Given the middling disorder of organisms as measured by physics one might then be led to conclude that in the slow run down of the universe from high order to low order the universe will naturally run through the intermediate disorder of organic forms. Clearly there is something wrong with TB&O’s presentation of the thermodynamic case against evolution. (To be fair I ought to acknowledge that ID theorist Trevors, in this paper, does demonstrate an appreciation of the intermediate place occupied by the structures of life.)
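Here is a toy Python illustration of that definition of disorder, with a 20-cell binary string standing in for a system of particles and the count of 1s standing in for the macroscopic condition; the numbers are arbitrary and the example is mine, not TB&O’s:

from math import comb, log

N = 20
for ones in (0, 5, 10):
    W = comb(N, ones)   # number of microstates consistent with this macrostate
    print(f"{ones:2d} ones: W = {W:6d}, ln W = {log(W):.2f}")   # entropy S = k ln W

The perfectly “ordered” macrostate (0 ones) has a statistical weight of exactly one; middling macrostates command vastly more microstates, and it is in this middling region of the spectrum that organism-like configurations sit.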

Consider also this quote from TB&O:

The second law of thermodynamics says that the entropy of the universe (or any isolated system therein) is increasing; i.e., the energy of the universe is becoming more uniformly distributed.

Precisely; when compared to the high concentration of energy in a star, the energy distribution in living structures is part of a more uniform distribution of the sun’s highly concentrated energy, and therefore as far as physics is concerned it represents a more degraded form of energy. Admittedly, the thought that physics’ rather crude measure of disorder implies that living structures are an intermediate macrostate in the thermodynamic run down of the high concentration of energy found in a star is counterintuitive, but TB&O have so far failed to give a coherent statement of the thermodynamic problem in order to explain this seeming anomaly.

* * *
The second law of thermodynamics works because cosmic systems are effectively walking randomly across the possible states open to them. Given this assumed random walk, the overall trend will be for those systems to move toward categories of state with greater statistical weight. It follows, therefore, that classes of states which have a large statistical weight are, as time progresses, the most likely to be occupied by the system; for example, heat is likely to move from a hotter region to a colder region because a uniform distribution of heat can be realized with a far greater number of microscopic arrangements than a heterogeneous distribution. A crucial feature of this physical model is the space of possible states. This is determined by the laws of physics, which limit the number of possible states available. Through their sharply limiting the number of available states, the laws of physics effectively impose an order on the cosmos. Because these laws transcend thermodynamics, the order they impose on the cosmos is not subject to thermodynamic run down.
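The hot-to-cold example can be made quantitative with a toy Python calculation: distribute 20 energy quanta between two small subsystems and count microstates using the standard Einstein-solid formula W(q) = C(q + n - 1, n - 1) for n oscillators; all the sizes are arbitrary choices of mine:

from math import comb

n, E = 5, 20                                  # oscillators per subsystem, total quanta

def W(q):
    """Microstates of one n-oscillator subsystem holding q quanta."""
    return comb(q + n - 1, n - 1)

weights = [(qA, W(qA) * W(E - qA)) for qA in range(E + 1)]
print(max(weights, key=lambda t: t[1]))       # (10, 1002001): the even split dominates

A random walk over these joint states therefore drifts toward the even energy split, which is the second law in miniature.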


As I have indicated in this post, it is conceivable that the laws of physics are so restrictive, in fact, that they eliminate enough possible states to considerably enhance the relative statistical weight of living structures at certain junctures in the evolution of the cosmos. In effect these laws impose a state “bottleneck”, whereby in the run down to disorder the cosmic system is forced to diffuse through this bottleneck – a place where living macrostates effectively have a realistic probability of existence because their relative statistical weight is enhanced by the laws of physics. However, having said that, I would certainly accept that ID theorists should challenge the existence of this state bottleneck; it is by no means obvious that physics implies such a bottleneck. But what I do not accept is the anti-evolutionist’s canard that the second law of thermodynamics in and of itself is sufficient to rule out a realistic probability of evolution. If the anti-evolution lobby wants to clinch their case they need to show that physics does not apply sufficient mathematical constraint to the space of possibilities to enhance the relative statistical weight of organic structures at certain stages in the run down to maximum disorder.
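A crude numerical caricature of the bottleneck idea (every number below is invented purely for illustration) shows how pruning the state space can transform the probabilities:

# Unconstrained: the 'living' class is a negligible sliver of the state space.
total_states, living_states = 10**12, 10**3
print("unconstrained probability:", living_states / total_states)    # 1e-09

# Constrained: suppose physics prunes the space severely but prunes the
# living class far less severely; its relative weight is then greatly enhanced.
surviving_states, living_left = 10**5, 10**2
print("constrained probability:  ", living_left / surviving_states)  # 0.001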

I agree with TB&O when they say:

There is another way to view entropy. The entropy of a system is a measure of the probability of a given arrangement of mass and energy within it. A statistical thermodynamic approach can be used to further quantify the system entropy. High entropy corresponds to high probability. As a random arrangement is highly probable, it would also be characterized by a large entropy. On the other hand, a highly ordered arrangement, being less probable, would represent a lower entropy configuration. The second law would tell us then that events which increase the entropy of the system require a change from more order to less order, or from less-random states to more-random states.

But the problem here is what TB&O leave unsaid. There is no quibble with the assertion that the second law of thermodynamics ensures a migration of an isolated system to its most probable class of state, as determined by statistical weightings. But, and this is what TB&O don’t acknowledge, the details of that migration are sensitive to the constraints of physics, and those constraints apply a transcendent order that is not subject to thermodynamic decay. These constraints may (or may not) considerably enhance, via a state bottleneck, the probability of the formation of living structures as an intermediate state of disorder in the run down to the most probable state.

That TB&O fail to understand the essential issue is indicated from the following:

Clearly the emergence of order of any kind in an isolated system is not possible. The second law of thermodynamics says that an isolated system always moves in the direction of maximum entropy and, therefore, disorder.

The intent of this statement, presumably, is to dismiss the possibility of the evolution of life on the basis of the second law of thermodynamics. In TB&O’s minds their intuitive conclusion is safe because they have wrongly pushed living structures to the extreme ordered end of the order-disorder spectrum. They see no prospect of life arising because thermodynamic change is always away from order and in TB&O’s flawed opinion it must therefore always be away from the states of life.

Despite TB&O’s initial caution in pushing their belief, it is clear that they think their conclusion is safe before they have demonstrated it:

Roger Caillois has recently drawn this conclusion in saying, "Clausius and Darwin cannot both be right."3 This prediction of classical thermodynamics has, however, merely set the stage for refined efforts to understand life's origin.

But to be fair to TB&O they don’t entirely dismiss evolution, and they are prepared to consider those refined efforts to understand life’s origin. They engage in some uncontentious thermodynamic analysis showing that the subsystems in a system that is far from equilibrium may decrease in entropy. Hence the only hope that evolution has, they concede, is in the area of systems far from equilibrium. In this TB&O are right. Evolution, if it is to be found at all, will only be found in non-equilibrium systems; that is, systems that diffuse through a conjectured state bottleneck where the squeeze on the available states enhances the relative statistical weight, and hence the probability, of living structures. But TB&O warn:

Nevertheless, one cannot simply dismiss the problem of the origin of organization and complexity in biological systems by a vague appeal to open-system non-equilibrium thermodynamics. The mechanisms responsible for the emergence and maintenance of coherent (organized) states must be defined.

On that point I certainly agree with TB&O: the appeal to non-equilibrium thermodynamics is vague and general, and the mechanisms responsible for the emergence and maintenance of coherent (organized) states must be defined. In other words, is there any real evidence for a state bottleneck, a bottleneck that must be the general mechanism behind evolution? In my next installment I hope to see what TB&O’s chapter 8 has waiting for us on this important question.