Wednesday, February 24, 2010

Atheology

In this post atheist Larry Moran criticizes attempts to “prove God” (sic; how can we expect to “prove God” if Larry can’t even expect to “prove evolution”?) from the appearance of design. In his opinion life shows signs of not being designed: he is probably thinking of some of those biological structures that look as though they have been cobbled together as a result of the accumulation of historical accidents. The end product is the kind of legacy architecture one sees in old towns, where the next development is constrained by and has to work from the fiat of what has gone before. (If you ask Larry he could probably reel off a whole list of biological examples.) In Larry Moran’s mind this concept of haphazard accumulative development is set against the notion of the intelligent planner who starts from a blank sheet and can therefore design a much more direct solution to a problem, a solution unencumbered by past random legacies.

There is, needless to say, a hidden counterfactual theology in the atheism here; namely the assumption that God is a kind of technical whiz who sits down like a human engineer with a virgin drawing board in front of Him and designs His best solutions. (Ironically, many anti-evolutionists conceive of a similar kind of designer.) But perhaps God is a storyteller first and an engineer second? Ours is clearly a story, a cosmic story of change, evolution and contingency snatched from the platonic limbo-land of possibility, a story that can be and has been told. Perhaps, like a human author, Deity has sat down not in front of the engineer’s drawing board but rather in an armchair beside a fire and told our story. If God is God, infinite and inscrutable, how can we be so sure (and naïve) about His motives?

But whatever; at the end of the day we are always left with the logical hiatus of getting something for nothing: why is there something rather than nothing? Where, if it exists, is the logical necessity to support the apparent contingency of our cosmos? So, in a more abstract sense than we find in the engineering metaphor, we seem to have a design question; a question of, as Paul Davies puts it in the “Cosmic Jackpot”: Why this universe and not another? Is it all just random? But what is randomness other than a mathematical description of a particular class of pattern, a pattern that creaks under a huge burden of complex brute contingency? The big question then is: who or what sources such complex patterns? Or is the source a hidden ontology that somehow reifies our notion of “chance”, in the sense of being a source that can make choices mindlessly?* But to envisage such a source of mindless purposelessness working behind a facade of pattern is itself a perverse kind of theology; an “atheology” that makes the ontology of chance into a god and stultifies further questions because it believes there is neither point nor meaning in asking them. This is the theology of the absurdity of it all.
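To illustrate the point that randomness is a description of a class of pattern rather than a mechanism, here is a minimal sketch of my own using compressibility as a crude stand-in for “mathematical description”; nothing here is claimed about the underlying source of either sequence:

```python
import random
import zlib

random.seed(0)
patterned = b"AB" * 500                                    # a cheap-to-describe pattern
noisy = bytes(random.randrange(256) for _ in range(1000))  # "complex brute contingency"

# A compressor is a rough proxy for "short mathematical description of a pattern":
print(len(zlib.compress(patterned)))  # far shorter than the original 1000 bytes
print(len(zlib.compress(noisy)))      # roughly 1000 bytes: no short description found
```

The patterned sequence collapses to a handful of bytes; the noisy one does not, which is exactly what marks it out as belonging to the “random” class of pattern, regardless of what produced it.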

Footnote
* What does it mean to make choices without mind? It can’t be randomness because randomness is to do with pattern description and as such makes no necessary allusion to the mechanism of “choice” that sources the pattern. As human minds we can only empathize with mind as the source of choice, so it seems that an ontology of mindless “chance choices” is forever utterly beyond our understanding; we can no more understand what it is like to be a mindless “chance source” than we can understand what it is like to be an ant.

Friday, February 12, 2010

Thermodynamics and Evolution – Again.

Is the relative statistical weight of life-containing states great enough at some point in the history of the universe to give those states a realistic probability?

(See also http://quantumnonlinearity.blogspot.com/2009/05/darwin-bicentenary-part-20-does.html)

Whether or not abiogenesis and evolution are facts, there is one argument that the anti-evolution ID theorists are ill-advised to use: the argument that abiogenesis and evolution violate the second law of thermodynamics. In two posts on Uncommon Descent (here and here) we find Granville Sewell still publicizing this erroneous view. The line of argument used by Sewell is based on the observation that a local increase in order can only occur if negative entropy is transmitted across the boundary of a locality into a system (which is equivalent to saying that positive entropy is exported out of the system). Sewell identifies this negative entropy (or “negentropy”) with the input of information across the boundary of the system. In his opinion it follows that some kind of information must be injected into the subsystem from without if the ordered structures of life are to have any chance of coming into existence. Ergo, since abiogenesis and evolution invoke no input of information (or negentropy) from beyond the confines of the Earth, it follows that thermodynamics prohibits the formation of organic structures.


The observation that local increases in order must be accompanied by the transmission of negentropy into the locality is, of course, correct, but Sewell seems to hold the erroneous view that this negentropy must be reified as explicit information crossing the boundary into that locality. In Sewell's view this explicit information is the only way of offsetting the extremely small probabilities associated with organic structures. He rightly observes that at the bottom of the second law are probabilities, but he neglects to say that these probabilities are conditional probabilities. True, living structures are such a tiny class of what is possible that it is clear they have a negligible absolute probability of formation. But in reality the development of living things is a conditional probability, a probability conditioned on the given physical regime of the cosmos. Given this regime, here is the big question: Is the conditional probability of evolution and abiogenesis raised to realistic levels by physical conditions alone? If so then the “information cost” of this high conditional probability must reside in the improbability of the physical regime that supports abiogenesis and evolution.

Now consider two scenarios:

ONE) Firstly, crystallization. Speaking in absolute terms, probabilities favour crystallization even less than they do biological structures; with respect to the huge mathematical space of platonic configurational possibility the high organization of crystals is a very unrepresentative arrangement of particles. The low statistical weight of crystal configurations, a lot lower, in fact, than even the statistical weight of living structures (which can be realized in innumerable ways), ensures that crystals have a very low absolute probability. However, in real physical terms the relatively tiny statistical weight of the class of crystals is more than compensated for by the constraint imposed by the laws of physics, a constraint which eliminates huge numbers of cases, thus considerably raising the conditional probability of crystallization. For under the right physical conditions crystals easily form: a solution with the right concentration, temperature and pressure results in a bit-by-bit construction of the crystal as atoms randomly find their place in the crystal lattice and are fixed into position. This considerable increase in local order is offset by the appropriate entropy transactions with the surroundings, thus preserving the overall upward trend in entropy.
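The role of constraint here can be made concrete with a toy enumeration of my own devising (the “law” below is just an arbitrary cap on the number of domain walls in a bit string, standing in for the real laws of physics):

```python
from itertools import product

def walls(s):
    """Count 'domain walls': adjacent positions holding unequal symbols."""
    return sum(a != b for a, b in zip(s, s[1:]))

n = 12
space = list(product("01", repeat=n))          # all 2**12 configurations
ordered = [s for s in space if walls(s) == 0]  # the perfectly "crystalline" states
allowed = [s for s in space if walls(s) <= 2]  # a toy "law" pruning the space

print(len(ordered) / len(space))    # absolute probability: about 0.0005
print(len(ordered) / len(allowed))  # conditional probability: about 0.015
```

The constraint eliminates all but 134 of the 4096 configurations, so the two fully ordered states carry roughly thirty times more relative statistical weight than they do in the unconstrained space; that is the sense in which a conditional probability can be realistic while the absolute probability remains negligible.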


There is one important remark about crystallisation that I would want to make here. As I have just said, entropy is exported out of the crystal locality, thus preserving the second law. Another way of looking at this is to think of negentropy crossing over into the locality, increasing the order of that locality. But in fact if we look at the boundary of the locality closely we find nothing resembling the import across this boundary of instructions telling the atoms how to assemble; the assembly is inherent in the information contained in the laws of physics, which constrain the locality sufficiently to render an absolutely improbable configuration very probable.

TWO) Now consider the case of a seed landing in a locality and a plant consequently growing. The growth of the plant constitutes a local increase in order. As in the case of crystallization, this local increase in order is offset by the appropriate entropy transactions with the surroundings, thus preserving an overall upward trend in entropy. However, this system is very different from crystallization on two counts. Firstly, the plant is not a manifestation of simple order like a crystal; it is of course very ordered, but it is at the same time a very complex form of order. Secondly, the elementary entropy transactions with the surroundings are not the only thing that crosses the boundary of the locality: from the outset there is the input of a seed, which is a very complex piece of genetic machinery and information. Thus this system is very much in line with Sewell’s expectation that the locality needs some fundamental informational input before a highly ordered yet very complex structure can be constructed.

****

We have above then two scenarios where there is a local increase in order and yet no violation of the second law of thermodynamics. Both systems represent a generalized form of “crystallisation” in the sense that an ordered structure is built up atom by atom, molecule by molecule, as randomly jiggling particles find a place in the structure and then are locked into place. But, of course, there are obvious differences between these two forms of “crystallization”. For mineral crystallization the “information” present in the local laws of physics is sufficient to “instruct” the process, whereas for the “crystallization” of a living structure the complex machinery in the seed packet is needed. For mineral crystallization the very regular and simple order of the crystal is in absolute terms highly improbable, but that improbability need only be bought at the price of a relatively elementary set of physical equations; a scenario that presents no obvious intuitive problem to us. On the other hand the growth of complex flora requires the laws of physics to be supplemented by the input of the complex structure found in a seed; once again a scenario that presents no serious intuitive problems for us.


In terms of absolute probabilities both mineral crystallization and the growth of an organism (in this case a plant) represent the appearance of highly improbable structures. But in terms of conditional probabilities both processes have high probabilities; for mineral crystallization a high conditional probability is bought at the price of the laws of physics and the given boundary conditions of the physical system. In the case of plant life the boundary conditions are supplemented by the complex seed packet that bootstraps the growth of an organism. Taken together, the laws of physics and the boundary conditions of these two systems have the effect of putting a very tight constraint on their respective systems, so tight in fact that in spite of the general trend of an overall increase in disorder there is a high (conditional) probability of a very unrepresentative, albeit localized, configuration making an appearance. The constraints on the systems have the effect of removing myriad mathematically possible scenarios, so that amongst the remaining class of possibility the statistical weight associated with the development of localised order is relatively high, thus considerably enhancing the probability of the formation of that local order.


So given these two scenarios, is it at least conceivable that some combination of abiogenesis and evolution can result in the “crystallization” (metaphorically speaking) of living things and yet not violate the second law of thermodynamics? The trouble is that neither of the above two scenarios does exactly what is required of abiogenesis and evolution: although mineral crystallization buys order only from the laws of physics and simple boundary conditions, the result is a very bland and dead form of order. In contrast, the “crystallization” of actual living things, in all cases we observe, supplements the laws of physics with a complex boundary condition in the form of the input of an intricate genetic machine needed to seed the formation of life. If the concepts of evolution and abiogenesis are to work as conventionally envisaged then they must work less like seed growth and more like crystal growth, where the only information resources are some simple boundary conditions and the laws of physics. The question then is this: Can life originate without the boundary condition of an initial input of complex informational machinery? Can we get ordered complexity from simple laws and algorithms?

If evolution and abiogenesis are the source of living complexity and diversity then on current theories they are a form of generalised “crystallization” resourced only by the inherent information present in some simple starting conditions and the laws of physics. It is this contention that sticks in the gullet of the anti-evolutionists; they do not believe that such a thing is possible. Those anti-evolutionists who have followed the work of William Dembski have conflated Dembski’s otherwise valid conservation of information argument with a conservation of complexity, and have concluded that the complexity of life can’t come from simplicity. In contrast, I hardly need point out that the militant atheists love the idea of complexity coming from simplicity, perhaps for two reasons: 1. The origin of life is then apparently sourced in “mindless simplicity”. 2. The notion of complexity coming from simplicity can be extended, and with a bit of imagination it might be mooted that complexity can come from simply nothing! Although the idea of getting something for nothing is very suspect, it is not true that only complexity comes from complexity; as I have remarked many times in this blog, simple algorithms and laws can generate complexity. The big question, however, is this: can they generate living complexity?


Crystallisation works because each stage in the crystal’s formation is a stable structure; if the current crystal contains N atoms and is stable, then so is the next crystal structure of N+1 atoms, and so on in an inductive way. Each structure in this succession is separated from the next by only an atom or two, and thus the structures effectively form a connected set of stable structures in morphospace; in other words, crystals are reducibly complex. This connected object in morphospace is an implication of the laws of physics. These laws act as a constraint removing such an immense class of random possibilities that, relative to this much reduced class, the statistical weight of an outcome containing crystals is considerably raised. The second law of thermodynamics is an outcome of random thermal agitations ensuring that a system migrates towards macrostates with the greatest number of microstates (that is, macrostates with the greatest statistical weight); if, because of the constraint of physics, the macrostates with the greatest statistical weight contain localized order, then that localized order will actually be favoured. Thus crystal formation does not violate the second law of thermodynamics because the laws of physics eliminate so many mathematically possible microstates that, as the system moves toward the macrostate with the greatest statistical weight, it moves toward a system that includes local ordering.
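The inductive point — that each N-atom structure is stable, so the N+1 structure is reachable — can be caricatured in a few lines (a hypothetical toy model of my own, not drawn from any physics text): a target reachable through stable intermediates is assembled quickly by random trials, while the same target demanded in one leap is not.

```python
import random

random.seed(1)

def grow(target_size, jump, p=0.5, max_steps=100_000):
    """Assemble a structure that is only stable in increments of `jump` units:
    all `jump` attachments must succeed in one step, or the fragment falls apart."""
    size, steps = 0, 0
    while size < target_size and steps < max_steps:
        steps += 1
        if all(random.random() < p for _ in range(jump)):
            size += jump
    return steps if size >= target_size else None

print(grow(20, 1))   # stable at every increment: typically a few dozen steps
print(grow(20, 20))  # a single 20-unit leap (odds ~2**-20 per step): almost
                     # certainly exhausts the budget and returns None
```

The connected set of stable intermediates plays the same role here that the crystal lattice plays in mineral crystallization: it converts an absolutely improbable end state into a sequence of individually probable steps.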

Conventional abiogenesis and evolution do not assume the input of explicit information; rather, the information is conjectured to be implicit in the laws of physics. Therefore the theoretical precondition of these processes is similar to that required by crystal formation; namely, a connected set of stable bio-structures in morphospace separated by gaps small enough to be jumped by random thermal agitation and/or random mutations; that is, the theoretical precondition for the “crystallization” of life is that bio-structures are reducibly complex. As for crystal formation, this kind of evolution/abiogenesis does not violate the second law of thermodynamics, because the physical regime is conjectured to eliminate so many mathematically possible microstates that scenarios where there is a local increase in order occupy a considerably larger relative proportion of the now constricted space of possibilities, thereby much enhancing their probability. If at this point it seems intuitively unlikely that the second law of thermodynamics would allow the formation of such complex ordered structures, we need only recall the growth of a plant from a seed: clearly thermodynamics does not prohibit the growth of a complex organism. As for crystal growth, so it is for organic growth: the twin physical constraints of boundary conditions and the laws of physics eliminate so many mathematically possible microstates that the local increase in order entailed by organic growth becomes a considerably larger relative proportion of the now constricted space of possibilities, thereby much enhancing its probability. However, it is clear that plant growth depends on the boundary condition of the initial input of organized complexity in the form of a seed. But then abiogenesis and evolution also depend on an initial input of complexity; namely, a complex arrangement in morphospace whereby stable structures form a connected set, an arrangement that would have to be entailed by the laws of physics.
This arrangement in morphospace serves a similar purpose to the complex genetic mechanisms found in a seed. This invisible mathematical structure (if it exists) tends to confound the anti-evolutionist’s expectation that the information needed to assemble complex objects can only be found reified in complex material objects.


I must qualify the foregoing by admitting that it is not at all clear that the conjectured structures in morphospace required by evolution and abiogenesis actually exist, or even can exist. Moreover, it is not at all clear that the laws of physics, as we currently understand them, imply that biological structures are reducibly complex. The anti-evolutionist’s contention that biological structures are in fact irreducibly complex may still be true. But the point I’m making here concerns the second law of thermodynamics and not (ir)reducible complexity. That point is this: if biological morphospace is populated by a set of reducibly complex stable structures (that is, structures separated by small increments of change) then, as in crystal formation, abiogenesis and evolution would not violate the second law. Moreover, the “crystallization” of life would require no explicit information directing the process to pass through the boundary of an evolving locality in order to seed it. The information required for life would, on this conjecture, be implicit in the laws of physics inasmuch as those laws entail the required arrangements in morphospace.

That simple laws of physics are information laden and can potentially define something as complex as the arrangement of structures in morphospace is another concept that anti-evolutionists may find difficult to accept. As I have remarked before, anti-evolutionists often wrongly identify simple laws and algorithms with “necessity” and thus wrongly conclude that equations can’t carry information. Moreover, they tend to conflate the concepts of complexity and information. Thus they conclude that simple algorithms and laws can’t generate complexity – this is, of course, wrong, as we know that simple algorithms can generate the complexity of fractals and pseudo-random sequences. Even if we concede that the anti-evolutionists are right about the irreducible complexity of bio-morphs, one thing remains clear: whether the layout of bio-morphospace entails reducible complexity or irreducible complexity, this space of structures is itself an extremely complex object; therefore either way the physics of our world entails a morphospace with a very complex layout. So even taking on board the anti-evolutionist’s concept of irreducible complexity we find that we have to admit that simple physical laws can generate very complex objects, whether those objects favour or block evolution!
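As a quick illustration of complexity from simplicity, here is Wolfram's rule 30 cellular automaton, a standard example of a trivially simple rule generating an intricate, pseudo-random pattern (the grid width and the text rendering below are my own arbitrary choices):

```python
def rule30(cells):
    """One step of Wolfram's rule 30: new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

row = [0] * 31
row[15] = 1                       # one "on" cell: about as simple as a start can be
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = rule30(row)
```

One line of update logic, applied repeatedly, yields a pattern complex enough that rule 30 has been used as a pseudo-random number generator; the information needed to specify the pattern is carried entirely by the rule and the starting condition.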

Although it is clear that Granville Sewell’s opinions about the second law of thermodynamics are wrong, I would want to concede that abiogenesis and evolution are subject to a reasonable doubt, on the basis that the questions surrounding (ir)reducible complexity are not, in my mind, settled. It’s a good thing that I’m not a professional scientist, because even this expressed uncertainty is likely to be regarded as the slippery slope to the "scientific heresy" of ID. But as I have remarked before, whatever the arrangement of biological structures in morphospace may be, I have a funny feeling that this arrangement is computationally irreducible; that is, the only way we have of acquiring analytical evidence about that arrangement is to actually carry out a very long computation; there are no analytical short cuts. Hence, the burden of evidence is thrown back on the actual computation itself; in this case the actual workings of natural prehistory as manifested by the existence or nonexistence of paleontological evidence. If my mathematical intuitions are right then, as I am not a paleontologist, it doesn’t look as though I can make much further progress on this subject.

Sunday, February 07, 2010

Fuzzy after dinner thoughts one quiet Sunday afternoon.


Brave Knight Sir Roger Penrose doesn’t know which way his Hilbert space vector is going to point next

I am hoping to kick this habit of evolution vs. anti-evilution debate posting, an activity that I have engaged in for nearly two years now: I honestly don’t think I can make much further useful progress on the question. So, I plan to make only two more posts on the debate in order to wind it up. My mind is now turning (back) to other issues.

With the authoritative and excellent help of Sir Roger Penrose’s books I have been pondering the vexed question of the evolution (“evolution” as in “change”, not as in “evilution”) of the quantum mechanical state vector. The following is a résumé of my current thinking, thinking which I have to admit is rather at the impressionistic sketch stage. (There is no guarantee as to the correctness of the following; after all, I’m not a “Sir”. Also, today was one of those days when, as H. G. Wells put it in "The Time Machine", I was experiencing "that luxurious after-dinner atmosphere, when thought runs gracefully free of the trammels of precision".)

The orientation of the QM state vector in Hilbert space changes in a smooth and deterministic way until a measuring system from without hits that vector, whereupon it appears to undergo a random discontinuous jump according to probabilities calculated with the “projection hypothesis”. Ostensibly, Hilbert space is mathematically isotropic. Moreover, there seems to be a complementary symmetry between position and momentum (or time and energy), and this symmetry is especially clear in relativistic renditions of quantum mechanics. So given these symmetries, on the face of it there seems to be no reason why we shouldn’t observe those strange mixtures of state that involve macroscopic objects occupying two positions at once, much as do photons in Young’s slits experiment; for given the isotropy of Hilbert space, spatially mixed macroscopic states are obviously no less preferred than spatially unmixed states.

It has to be admitted that decoherence theory may provide an excellent solution to this problem. Entanglement with “hot” macroscopic measuring tools will appear to collapse the wavefunction. Furthermore, thermodynamics and entanglement will ensure that for macroscopic objects some balance between a spread of momenta and distances is maintained, and this perhaps is precisely what we see in the seemingly unambiguous states of the macroscopic world. Decoherence theory remains a very good candidate for explaining the apparent random jumping of the state vector and the preference for certain kinds of macro-state. The big bonuses of this theory are that it preserves frame independence and other symmetries, and above all that holy grail of reductionist science: an “in principle” determinism – the closed system. However, it has to be said that at the level of the outermost frame there are no outside “hot” measuring contexts which would help maintain the state vector in some balance between momentum and position.

For reasons I have given here, I am inclined to favour the notion of real discontinuous random jumps in the state vector. Moreover, that the Schrödinger equation is not Lorentz invariant may hint that there is an actual asymmetry between space and momentum. In fact my own attempt at explaining gravity depends on a non-linear quantum equation that makes use of postulated underlying asymmetries in frame that are only partially compensated for by Einstein relativity. Gravity is something that operates in x-space and not in momentum space; hence there is an asymmetry between space and momentum. Thus gravity is implicated as a factor explaining why we only observe spatially unambiguous macro states. Penrose’s ideas about why the state vector jumps randomly may be right: It may jump to ensure that we don’t get quantum superpositions which imply spatially ambiguous macroscopic distributions of matter entailing gravitational energy divergences violating the allowed uncertainty in energy. Such a view entails the end of the reductionist's closed system.
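For concreteness (a toy of my own, making no claim to capture Penrose's gravitational proposal), the two modes of change under discussion — smooth deterministic evolution punctuated by a random projective jump — can be sketched for the simplest possible system, a single qubit:

```python
import cmath
import random

def evolve(a, b, theta):
    """Smooth, deterministic unitary rotation of the state vector (a, b)."""
    c, s = cmath.cos(theta), cmath.sin(theta)
    return c * a - s * b, s * a + c * b

def measure(a, b):
    """The 'projection hypothesis': a random, discontinuous jump.
    Outcome 0 occurs with probability |a|**2; the vector then collapses."""
    if random.random() < abs(a) ** 2:
        return 0, (1.0, 0.0)
    return 1, (0.0, 1.0)

random.seed(2)
a, b = 1.0, 0.0                # start in the definite state |0>
a, b = evolve(a, b, 0.3)       # Schrödinger-style evolution: smooth and reversible
outcome, (a, b) = measure(a, b)  # abrupt, probabilistic, irreversible
print(outcome, a, b)
```

Decoherence theory would replace the hand-inserted `measure` step with entanglement between the qubit and a “hot” environment; the sketch is only meant to exhibit the contrast between the two kinds of state-vector change.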

Wednesday, January 27, 2010

Trouble Hunter

Characters of the wild web No 19: Cornelius Hunter trouble shoots evolution. But if you succeed in shooting down evolution, what have you got left?

If you are looking for interesting and robust challenges to conventional evolution Cornelius Hunter is the man to see.

In this post he raises a subtle point regarding the philosophy of science: just what is Methodological Naturalism? Cutting a long story short, it seems that there are implicit self-referencing problems in the concept. If we claim to be methodological naturalists we are presumably restricting ourselves to a particular epistemological method. But an epistemology only works given, a priori, a particular model of ontology. Thus to use a particular epistemological method successfully we require a particular kind of underlying ontology to support it. The question then naturally arises: can we use methodological naturalism as an epistemic method to check for the existence of an ontology that favours that method? That is, can we use methodological naturalism to validate methodological naturalism? But if we attempt this then we are (as Hunter points out) carrying out the investigation because we don’t know in advance what the answer to this question is. But in that case methodological naturalism could, for all we know, be based on a false concept of ontology. If methodological naturalism is based on a false ontology, how then can we rely on it to return a reliable answer about itself? Seemingly, then, there is an impasse here whose only solution is to hope that the methods we are using come up trumps in a kind of self-affirming feedback cycle. Does this amount to a kind of gambler’s winning streak, a faith even?

This post from Hunter is a more direct challenge to evolution. He remarks on the sophistication of flowering plants’ defense systems against bug infestation, and then goes on to ponder the possible evolutionary scenario. He thinks, however, not just about the “winning” path of change that has led to the final system but also about the myriad failed trials en route:

And of course these designs are observed by us only because they were the evolutionary winners. They are the proverbial tip of the iceberg. For every winner there are untold myriad losers. The designs that produced some other chemical rather than benzyl acetone. The designs that detected chemicals that the caterpillars don't secret. The designs that didn't couple with the detection system. The designs that produced secretions that had no effect on the caterpillars. The designs that wreaked havoc on the flowering process rather than merely altering the flowering time. And so on, and so forth. The plant must have been a veritable idea factory, churning out all manner of mostly useless Rube Goldberg devices.

This line of thought can be applied in general: every organic structure we see has, according to conventional evolution, come at the expense of a myriad failed “experiments” along the way; the structural dead ends, the routes that go nowhere, are many, perhaps just too many for the random agitations of thermodynamic diffusion to successfully explore and give us a successful result in “polynomial” time. Although this is a hand-waving argument, it nevertheless comes over with intuitively compelling force, and it is one that is worth pondering in relation to every organic structure we see. Hunter is right to bring it to our attention: it is not at all clear that physics applies enough constraint on morphospace to prevent the thermodynamic “search algorithm” being exponentially swamped by too many dead-end structures; in other words, the pathways linking a reducibly complex set of structures in morphospace may be far too “thin” to be probable routes of change. Or perhaps the morphospace of our physical regime may not even be populated by a reducibly complex set of stable/self-perpetuating structures.
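Hunter's worry about dead ends can be given rough quantitative shape with a toy search of my own (all numbers are arbitrary): if only one branch in two continues at each level, and every failure restarts from scratch, the number of trials needed grows exponentially with depth rather than polynomially.

```python
import random

random.seed(3)

def blind_search(depth, branching, budget):
    """Random search down a tree in which only one branch at each level
    leads anywhere; every wrong turn is a dead end and the walk restarts."""
    for trial in range(1, budget + 1):
        if all(random.randrange(branching) == 0 for _ in range(depth)):
            return trial  # trials needed to find the one viable path
    return None           # budget exhausted: swamped by dead ends

for depth in (4, 10, 20):
    print(depth, blind_search(depth, 2, budget=200_000))
# trials needed grow roughly like branching**depth, so the deepest search
# will likely exhaust its budget: exponential, not polynomial, behaviour
```

Whether real morphospace behaves like this tree, or instead offers the “thick” connected pathways that evolution needs, is exactly the open question of the main text.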

But as is often the case with the anti-evolution ID theorists they are open to the criticism of only being negative: Cornelius Hunter’s blog largely deals in anti-theory; he is an anti-theorist; that is, his main aim is to shoot down evolution, but as far as I can tell he provides no alternative history of origins that in turn can be shot down by a stalking assassin.

Evolutionists: What do they aim at if they've got no targets?


Friday, January 15, 2010

Up Front Front Loading

Spot the front loading in this theory.

This post on Uncommon Descent has proved interesting. Below I’ve quoted parts of it and added my own comments:


I have argued before that the core of ID is not about a specific theory of origins. In fact, many ID’ers hold a variety of views including Progressive Creationism and Young-Earth Creationism.

Comment: I understand the above to mean that in the poster’s opinion ID concerns not a history of origins but the ultimate source of information (which is identified with intelligence). Well, I suppose somebody from the anti-evolution ID community would have to say this because, as we know, there is little consensus amongst them on the course of natural history: there’s a lot of difference between an Old Earth theory of common descent and Young Earth Creationism. In fact, as far as an account of natural history is concerned, those ID theorists who believe in an Old Earth and common descent are closer to standard evolutionists than they are to YECs! But presumably this UD poster regards belief about the ultimate cause of life as a more important category marker than which story of natural history one believes. But at UD the ironies and paradoxes abound: in this post I comment on a UD post by William Dembski where he appears to distance himself from the YEC community.

But another category that is often overlooked are those who hold to both ID and Common Descent, where the descent was purely naturalistic. This view is often considered inconsistent. My goal is to show how this is a consistent proposition.

Comment: The author then goes on to consider a form of genetic “front loading” as evidence of the work of intelligent agency in the history of life. As this front-loaded genetic information is implemented by life at different junctures in prehistory it results in a bifurcating tree of common descent. (Although the author is careful to state that he himself does not hold this view.)


Behe’s actual view, as I understand it, actually pushes the origin of information back further. Behe believes that the information came from the original arrangement of matter in the Big Bang.

Comment: That’s what you might call the ultimate "up front" front loading into matter. Notice a habit of mind in operation here: It is taken for granted that front loaded “information” will be reified in a complex arrangement of matter; in this case the arrangement is envisaged to be implicit in the arrangement of matter in the “cosmic egg” of the Big Bang; it’s either that and/or a given arrangement of atoms in a very large genome.

And, as hopefully has been evident from this post, the mode of evolution from an information-rich starting point (ID) is quite different from that of an information-poor starting point (neo-Darwinism).


Comment: This is where he goes wrong in my view: One can analyze neo-Darwinism in the same counterfactual way that this UD poster is considering genetic (and even Big Bang) front loading and thus come to the conclusion that if neo-Darwinism has happened then it can only have done so because of an information rich front loading; in this case the front loading would be found in a reducibly complex arrangement of structures in morphospace (See my last post). Ergo, a working conventional neo-Darwinism becomes just as much a form of front loaded ID as genetic front loading. The reason why it is difficult to spot “Darwinism” as a front loaded system is that the information has no material reification in the form of an arrangement of “solid matter”; instead it is to be found in the platonic realm of morphospace.

In Darwinism, each feature is a selected accident.

Comment: “A selected accident” – precisely; something is doing the “selecting” and that something is the “information source” and that information source is found in the arrangement of morphospace. However, that arrangement would have to be implicit in the laws of physics. But I suspect that the anti-evolution community would take exception to that suggestion because they have a low view of the complexity generating powers of algorithms and laws. (Note: I said “complexity generating powers” and NOT “Information generating powers”)


I think that agency is a distinct form of causation from chance and law. That is, things can be done with intention and creativity which could not be done in complete absence of those two.

Comment: “Chance and Law”; well that’s a step in the right direction – it’s much better than “chance and necessity”, although I think “Law and Disorder” is even better. Yes, I agree that the a-priori complexity of intention and creativity can reach solutions that are practically unreachable by law and disorder. However if this UD poster is a theist why would he want to draw a sharp distinction between agency (=Divine Intelligence) and “chance and law”? If he is a theist then wouldn’t he believe that “chance and law” are ultimately sourced in divine agency and thus the ultimate cause of all that chance and law generate is to be found in that agency? Aren't "chance and law" a theory of origins rather than an ultimate cause? Is he underestimating what divine intelligence, the ultimate cause, can achieve through law and disorder? In my view “chance and law” are a description of the status quo, a theory of origins, and as such they are not causes in and of themselves.


Final Comments:

1. As I have remarked before the anti-evolution ID community seem confused about the difference between information and complexity: They expect high information objects to be complex (That need not be the case as I have already suggested in previous posts) and therefore the mantra that “you can’t create information” translates to “you can’t create organized complexity”. This is why they don’t believe in evolution and abiogenesis; to work, these processes are required to create organized complexity and therefore in the anti-evolutionist’s mind they must be creating information which of course they can't do.

2. If I am right in suggesting that common or garden evolution could equally be mooted as an ID candidate then this means that the span of possible ID theories ranges from YEC theory, through genetic front loading to neo Darwinism! However, it is likely that the anti-evolution ID community will resist this conclusion: Given all the name calling that has been traded between anti-evolutionists and the evolutionists it is simply impossible for the anti-evolution community to call a moratorium on bog standard evolution on the basis that it can be mooted (albeit counterfactually) as an ID candidate because they would lose face and become a laughing stock. Moreover the anti-evolution community has committed itself to portraying bog-standard evolution as a “natural” process that pretends to be able to create information and thereby the seat of a godless lie. In the mind of the anti-evolutionists the “natural forces” of evolution are set over and against Divine Agency. Hence they don’t see just one “prime mover” but a trinity of causal candidates: God, chance and necessity with the latter two as upstart pretenders. No doubt if pressed they would confess that “chance and necessity” are also down to the Divine Mind. But “chance and necessity” has become blighted by a close association with atheism and is seen as a tool of an atheist conspiracy, to be resisted at all costs. I’m reminded somewhat of the fact that acceptance of the Gregorian calendar was at first resisted in England; it was too closely associated with the papacy to be immediately acceptable.

Friday, January 08, 2010

Darwin Bicentenary: Summing up Part 1.

Now that the Darwin Bicentenary year has passed and I have worked through 30 parts of this series re evolution and Intelligent Design, I thought it time to take stock of progress, if any at all.

In such a wide interdisciplinary subject as evolution I decided to considerably restrict my terms of reference, and so I have merely nibbled at one small corner of the subject. I have largely focused on two areas, both of which are very important to Intelligent Design, and which are also in line with my particular experience and interests; namely, the second law of thermodynamics and the irreducible/reducible complexity question. Moreover, the anti-evolution ID community seems to think that these two areas contain killer arguments against evolution. Whilst it may be true that some other quirk of chemistry, physics and mathematics makes evolution impossible I find that neither of these subject areas provides unequivocal objections against evolution.

Firstly let me briefly explain my position on Intelligent Design. What I call “Elementalist” explanations will, I suggest, never arrive at self-explanation (i.e. aseity). Elementalist explanations describe the cosmos using increasingly compact equations and algorithms. Barring the use of “Turtles all the way down” regresses this type of explanation ultimately leads to an irreducible core of data and logic that can undergo no further explanatory compression and therefore results in a logical hiatus. These compacted explanatory/descriptive objects are too simple to hold out the prospect of providing “self explanation”. My guess then, for what it is worth, is that we must look to a-priori infinite complexity for ultimate explanations. It may be that this “infinitely complex something” which has “chosen” and given us our particular physical regime is utterly insentient and impersonal,* but for me this opens the way for a consideration of the complexities of theism and therefore of Intelligent Design. I must add, however, that this is no argument which can or should be used to pressure atheists into belief. As a pathway to theism it is my own particular pilgrimage and I do not necessarily expect anyone to follow it, or think worse of them if they don’t.

That I am in one sense in the ID camp means that I must carefully distinguish myself from the ID community represented by, say, Uncommon Descent. For reasons of their own they are vociferous in their anti-evolutionism and therefore I refer to them as the “anti-evolution community”, in order to distinguish them from my own position. In fact during UD’s watch the ID debate has become a de facto ID vs. evolution debate and has become so polarized that many perceive Intelligent Design as necessarily anti-evolutionist and vice versa.

None of this is to say I have any emotional commitment to evolution; for me evolution is like a piece of conceptual engineering that I have on my bench and is proving to be a worthwhile study. As I have often maintained, to create a form of evolution that works seems to require as much a feat of engineering design as other, less implicit, creative dispensations. Thus, the bare idea of Intelligent Design is not sufficient to rule out evolution as a design option. Set against this view is the anti-evolution community’s implicit depiction of evolution as a kind of primeval and elemental Chaos Monster, an upstart pretender from the abyssal deep incapable of competing with the real Divine Creator. Thus by implication those who merely court evolution (like myself) may be in danger of being accused of inadvertently conniving with a Satanic conspiracy against the living God, thus assisting the cause of a demiurge that pretends to be able to create but in reality is an evil fraud.

However, be that as it may, in this particular post I want to focus on the positive and stimulating aspects of the anti-evolution community, aspects that I have focused on over the last year or so:


ONE) For me, one of the best parts of the anti-evolution community’s work is William Dembski’s papers on information conservation. This work, as far as I can tell, is sound, and is in line with my own understanding that whatever way one cuts the cloth our cosmos has to be “paid for” either by very improbable preconditions or an enormous (and very speculative) multiverse. It echoes my understanding that our current mathematics/science will always face an ultimate “in principle” logical hiatus. Logical necessity has a probability of 1 and thus has zero information content, but it seems that our scientific logic will never find necessity and thus will always be information laden with contingency; an apparent “free lunch” from who knows where. Dembski refers to this as “active information”. But I must remark here that it seems that Dembski’s work is not an attack on evolution per se. In fact in a UD post Dembski described one of his books as not inconsistent with theistic evolution. (See this post for more details). In short Dembski’s work tells us that if evolution has happened then it can only have happened with the right burden of active information on board: Evolution or no evolution, both require what to the theist looks like a design choice. Thus a theist cannot write off evolution on the basis that it pretends to create information out of some “know nothing” primeval elemental and “natural” force. In the final analysis the information content of evolutionary processes is as much an unjustified “front loaded” free lunch as that provided by the magician God of Young Earth Creationism who uses magic words to “speak the world into existence”.

TWO) My acceptance of Dembski’s work has to be qualified: As we trace the source of life back through a sequence of highly improbable mathematical preconditions we eventually come up against a logical hiatus where a particular set of contingent conditions are taken as brute givens. What space of possibility are we to use to assign probabilities to these source conditions? In fact, since relative to our knowledge these conditions are givens there is then no ontological space known to us from which these conditions have been selected by a higher process. Thus, is it meaningful and intelligible to assign probabilities to these conditions if we know nothing of the process that has created them? Of course, these conditions can be viewed as a selection taken from a platonic space of mathematical possibility and thus a probability can be assigned to them using Bernoulli’s principle of insufficient reason. This is in fact Dembski’s practice. I’m inclined to accept this practice, but not without reservation.

THREE) Irreducible complexity is a key concept used by the anti-evolution ID movement as a killer challenge to evolution. I myself regard it as an objection of intermediate strength rather than unequivocal. Evolution and abiogenesis require stable structures to be linked into a connected set of incrementally separated structures allowing thermodynamic agitations to cause diffusion from one structure to the next; this is what I mean by “reducible complexity”.** But it is very difficult to decide between irreducible and reducible complexity on an empirical basis; in particular if evolution is computationally irreducible and the fossil record sparse. Arguments may be mooted as to which side of the debate the scientific onus falls on to empirically decide the case. But in absolute terms it is difficult to be sure one way or the other; one man’s “best explanation” methodology may be another man’s bigotry. One thing, however, we can be sure of is that evolution and abiogenesis can’t proceed without reducible complexity. Hence, the question of reducible or irreducible complexity is very important to both sides of the debate. The anti-evolution community is right to stress it, but for me the case for irreducible complexity is not without reasonable doubt.
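The idea of diffusion through a connected set of incrementally separated structures can be caricatured with a toy sketch (entirely my own illustrative construction, with made-up levels and a made-up gap):

```python
import random

# Toy sketch of reducible vs irreducible complexity: "thermal agitation"
# is a random walk over a ladder of stable structures, with level 0 the
# simplest structure and level 10 the most complex.
def diffuse(connected, steps=100_000, seed=42):
    rng = random.Random(seed)
    position = 0
    best = 0
    for _ in range(steps):
        target = position + rng.choice([-1, 1])
        # An unbridgeable gap at level 5 models irreducible complexity:
        # there is no stable intermediate there, so agitation cannot pass.
        if not connected and target == 5:
            continue
        if 0 <= target <= 10:
            position = target
            best = max(best, position)
    return best

print(diffuse(connected=True))   # the connected ladder lets diffusion reach level 10
print(diffuse(connected=False))  # a single missing rung halts diffusion below level 5
```

The point of the sketch is only this: random agitation is a perfectly good dynamic for climbing to complexity, but solely on condition that the space of stable structures is connected by incremental steps.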


So having summed up what I think of as the best aspects of the anti-evolution movement, in my next summing up post I will consider what I believe to be its worst aspects.



* I must qualify this by saying that I am inclining towards the opinion that the idea of a cosmos without sentience and perception is unintelligible

** "Reducible complexity" is, in fact, one means by which an evolutionary process can be "front loaded" with Dembski's active information.

Thursday, December 24, 2009

Darwin Bicentenary Part 30: The Mystery of Life’s Origin, Chapter 9

Continuing my consideration of the three online chapters of Thaxton, Bradley and Olsen’s book “The Mystery of Life’s Origin”.

Absence of Evidence…

In chapter 9 of The Mystery of Life’s Origin Thaxton, Bradley & Olsen start by expressing the essential mystery of life’s origin:

In Chapter 7 we saw that the work necessary to polymerize DNA and protein molecules from simple biomonomers could potentially be accomplished by energy flow through the system. Still, we know that such energy flow is a necessary but not sufficient condition for polymerization of the macromolecules of life. Arranging a pile of bricks into the configuration of a house requires work. One would hardly expect to accomplish this work with dynamite, however. Not only must energy flow through the system, it must be coupled in some specific way to the work to be done.

What TB&O are saying here is that although the energy required to carry out the work of organizing atomic and molecular units into a living configuration may be available, that energy in itself is useless without some kind of information engine directing how it is to be used. An analogous situation arises if one has a large work force of men; this work force is all but useless unless they have the information on how to act.

In chapter 9 TB&O look at the crucial question of abiogenesis; that is, the question of whether disorganized elemental matter has the wherewithal to organize itself into complex living structures. They consider a variety of proposed mechanisms: Pure chance, natural selection, self ordering tendencies in matter, mineral catalysis, non-linearity in systems far from equilibrium and the direct production of proteins and DNA. Needless to say they are of the opinion that at the time of writing no substantive model of abiogenesis exists. I am not aware that the situation has improved substantially in favour of abiogenesis since TB&O’s book was written in the mid eighties.

TB&O’s main point comes out very clearly when they consider experimental attempts to directly form polypeptides. TB&O believe that most theories of abiogenesis founder in one important respect: Whilst they accept that under the right disequilibrium conditions organic polymers can be synthesised, they point out that these polymers are randomly configured, and thus by implication contain no useful information content. In TB&O’s opinion the chief challenge to theories of abiogenesis is the question of what supplies the necessary “configurational entropy work”. In their words:

Virtually no mechanism with any promise for coupling the random flow of energy through the system to do this very specific work has come to light.

TB&O tender experimental results which suggest that the formation of polypeptides shows no bias in the frequency of connections:

The polymerization of protein is hypothesized to be a nonrandom process, the coding of the protein resulting from differences in the chemical bonding forces. For example, if amino acids A and B react chemically with one another more readily than with amino acids C, D, and E, we should expect to see a greater frequency of AB peptide bonds in protein than AC, AD, AE, or BC, BD, BE bonds.

Furthermore, the peptide bond frequencies for the twenty-five proteins approach a distribution predicted by random statistics rather than the dipeptide bond frequency measured by Steinman and Cole. This observation means that bonding preferences between various amino acids play no significant role in coding protein. Finally, if chemical bonding forces were influential in amino acid sequencing, one would expect to get a single sequence (as in ice crystals) or no more than a few sequences, instead of the large variety we observe in living systems. Yockey, with a different analysis, comes to essentially the same conclusion.

Coupling the energy flow through the system to do the chemical and thermal entropy work is much easier than doing the configurational entropy work. The uniform failure in literally thousands of experimental attempts to synthesize protein or DNA under even questionable prebiotic conditions is a monument to the difficulty in achieving a high degree of information content, or specified complexity from the undirected flow of energy through a system.

This kind of evidence supports TB&O’s belief that there is no natural information mechanism that can correctly configure an arrangement of atoms:

We have noted the need for some sort of coupling mechanism. Without it, there is no way to convert the negative entropy associated with energy flow into negative entropy associated with configurational entropy and the corresponding information. Is it reasonable to believe such a "hidden" coupling mechanism will be found in the future that can play this crucial role of a template, metabolic motor, etc., directing the flow of energy in such a way as to create new information?

*****

My own reaction to TB&O’s rather negative assessment of abiogenesis is that if they are genuinely interested in the possibility of a natural route from inorganic matter to functioning organisms they are looking in the wrong place and therefore find what they expect: No evidence of abiogenesis. But if they do start looking in the right place there is a major epistemological snag: it is in the nature of abiogenesis (and evolution as well) to be difficult to observe; that is, it is an epistemically challenging domain.

If natural abiogenesis has occurred on our planet then, as I have already mooted in this Darwin Bicentenary series, the coupling mechanism providing the information for abiogenesis is likely to be found in the arrangement of structures in morphospace; for abiogenesis to work there must exist a connected class of stable structures with complexities ranging from the very simple to the very complex; that is, abiogenesis requires biological structures to be reducibly complex. This connected class of biological structures is an object that is neither dynamic nor visible but instead is a static platonic structure which, if it exists, must be implicit in the laws of physics. The information for abiogenesis and evolution is not going to be found in chemistry that encodes the right information in biopolymers in a straightforward way. The information structures, if they exist, will be found in a mathematical object spanning morphospace, an object that has no material reification.

The scientific challenge faced by those committed to abiogenesis (and evolution too) should not be underestimated because abiogenesis may be a computationally irreducible process. If this is true then it follows that there is no simpler analytical proof of the existence of abiogenesis other than to observe the actual processes that generate life. In this case the evidence of abiogenesis will rest less on theoretical analysis than it will on observation. Hence an absence of fossil evidence becomes a serious barrier to scientific progress in abiogenesis and will add plenty of fuel to the fire of anti-evolutionism. Another problem for theories of abiogenesis is that if there is a natural route from inorganic matter to living structures then fossil evidence is likely to largely comprise unstructured chemical signatures of equivocal interpretation. Researchers are thus left with little choice but to propose very tentative and speculative chemical scenarios that might look as though they have the potential to generate life. But if TB&O have presented the evidence fairly then it is clear that no consensus theory of abiogenesis exists, let alone a theory with a robust observational base.

TB&O in common with other anti-evolutionists conflate information and complexity:

Regularity or order cannot serve to store the large amount of information required by living systems. A highly irregular, but specified, structure is required rather than an ordered structure.

Whilst I would give a qualified acceptance to the idea that information is conserved, in that it requires prerequisite resources either in the form of highly improbable pre-conditions or the huge spaces of a multiverse, it is not true in general that structural complexity cannot be “created” from regularity in relatively short times. Fractals and pseudo random sequences are examples of “free lunch” complexity derived from simple starting conditions in polynomial computation times. As yet I know of no principled reason why the complex arrangements in morphospace required by abiogenesis cannot also be generated relatively rapidly from the right physical regime of laws. ID theorists may tell us that “equations don’t generate Complex Specified Information” but in spite of that assertion equations can A) be the depository of large amounts of information, B) create complexity in polynomial time, and C) therefore be used to specify complex static forms. (See my last post)
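To illustrate the “free lunch” of complexity (though not, on the conservation view, of information) here is a minimal sketch of one of the examples mentioned above: a pseudo random sequence generated by the logistic map, a one-line rule whose output is produced in linear, and hence polynomial, time. The starting value and sequence length are arbitrary choices of mine.

```python
# The logistic map at r = 4: a maximally simple rule whose output is
# chaotic and effectively pseudo random, generated in linear time.
def logistic_sequence(x0, n, r=4.0):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

seq = logistic_sequence(0.1234, 1000)
# The generating rule compresses to a single line, yet the sequence shows
# no simple repeating order: "free lunch" complexity, not free information.
```

The generator itself is an irreducible kernel of a few symbols, yet the output is as structurally complex as one cares to make it by increasing n.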

The anti-evolutionists appear to rule out abiogenesis on the basis of the methodological rule “absence of evidence is evidence of absence”. But ardent evolutionists may claim that the onus is on ID theorists to demonstrate that biological structures are irreducibly complex and therefore cannot be the product of abiogenesis and evolution. See this post of mine for the ironies of this stand-off between evolutionists and anti-evolutionists. The irony is further compounded when one hears atheists attacking theism on the basis that there is no God because “absence of evidence is evidence of absence”. God and evolution are both very complex objects so perhaps it is not surprising that like abiogenesis it is in the nature of God to be difficult to observe!

Saturday, December 12, 2009

Darwin Bicentenary Part 29: Dembski and Marks’ Latest Paper.

My Unfinished Business with the Anti-Evolution ID Community

William Dembski and Robert Marks’ latest published paper can be found by following a link from this Uncommon Descent post. Its content is very reminiscent of this paper which I considered here, here and here*. In this latest paper, however, Marks and Dembski shift the focus to what they call Bernoulli’s principle of insufficient reason, a principle that justifies their assuming equal a priori probabilities over a space of possibilities. It is likely that they feel the need to justify this principle further as their conclusions regarding the “conservation of information” stand on this assumption. I raised this very issue in this discussion on UD where I said:

Yet another issue is this: When one reaches the boundary of a system with “white spaces” beyond its borders how does one evaluate the probability of the system’s options and therefore the system’s information? Is probability meaningful in this contextless system? I am inclined to follow Dembski’s approach here of using equal a priori probabilities, but I am sure this approach is not beyond critique.

The choice of using Bernoulli’s PrOIR arises when one is faced with a set of possible outcomes for which there is no known reason why one outcome should be preferred over another; on this basis Bernoulli’s PrOIR then assigns equal probabilities to these outcomes. However, probability (as I proposed in my paper on probability) is a quasi-subjective variable and thus varies from observer to observer and also varies as an observer’s knowledge changes. In particular probability estimates are liable to change if knowledge of the context in which the outcomes are embedded increases. For example consider a six sided die. Initially, if we know nothing about the die then using PrOIR we may assign an equal probability of 1/6 to each side. However, if after, say, examination of the die we find it to be biased, or find that it repeatedly returns lopsided frequencies when it is thrown, then these larger contexts of observation have the effect of weighting the probabilities of the 6 possible outcomes. Thus given this outer context of observation we find that we can no longer impute equal probabilities to the six possible outcomes. But, and this is Dembski and Marks’s essential point, the cost of buying better than PrOIR results on the die is paid for by an outer system that itself has improbable lopsided probabilities. From whence comes this improbable skew on the outer system? This skew can only itself come from a wider context that itself is improbably skewed and so on. According to Dembski and Marks it is not possible, in realistically sized systems, to destroy this improbable loading. Ergo, some kind of conservation law seems to be operating, a conservation law Dembski and Marks call the conservation of information.
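The die example can be put numerically. The throw counts below are hypothetical numbers of my own, purely for illustration:

```python
from math import log2

# Bernoulli's principle of insufficient reason (PrOIR): knowing nothing
# about the die, assign each face an equal probability of 1/6.
prior = {face: 1 / 6 for face in range(1, 7)}

# Hypothetical lopsided throw counts (made-up figures, for illustration).
counts = {1: 5, 2: 5, 3: 5, 4: 5, 5: 10, 6: 70}
total = sum(counts.values())
posterior = {face: n / total for face, n in counts.items()}

# The surprisal -log2(p) of throwing a six falls once the bias is known:
# the outer context of observation has "loaded" the probabilities.
print(-log2(prior[6]))      # about 2.58 bits under PrOIR
print(-log2(posterior[6]))  # about 0.51 bits given the observed skew
```

The better-than-PrOIR performance on the six is “paid for” by the improbable physical skew of the die itself, which is the nub of Dembski and Marks’ conservation argument.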

I believe Dembski and Marks’ essential thesis to be unassailable. Bernoulli’s PrOIR can be used to set up a null hypothesis that leads one to conclude that the extremely remote probabilities of living structures point to a kind of loaded die in favour of life. Thus, evolution, if it has taken place, could not have happened without biasing its metaphorical die with what Dembski and Marks call “active information”. In fact as I discussed in this post even atheist evolutionists, in response to one of Dembski and Marks’ earlier papers, admit that evolution must be tapping into the resources of a physical regime loaded in favour of life, most likely by the laws of physics; but from whence came our particular physical laws? For these atheists, however, this is effectively a way of shelving the problem by projecting it onto the abstruse and esoteric realm of theoretical physics where it awaits solution by a later generation of scientists. Hiding up the mystery of the origins of living things in the ultimate genesis of the laws of physics allows atheists to get on with their day to day life without any worries that some unknown intelligence could be lurking out there.

It is ironic that multiverse theory is a backhanded acknowledgement of Dembski & Marks’ essential thesis. Speculative multiverse theory assumes that in an all but infinite outer context the butter of probability is spread very thinly and evenly. Hence, because no configuration is given a probabilistic bias over any others it seems unlikely that there is an intelligence out there fixing up the gambling game in favour of life. But if we are to conclude that we can shrug off life as “just one of those chance things” probability theory requires the multiverse to be truly huge. These immense dimensions are required to offset (or "pay for" in D&M's terms) the minute probability of our own apparently “rigged” cosmic environment. That the multiverse must be so extravagantly large in order to make our highly improbable context likely to come up in the long run so to speak, is eloquent testimony of just how active Dembski’s and Marks’ active information must be if our own cosmos should in fact prove to be a one-off case.
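The arithmetic behind “truly huge” is elementary: for an outcome of probability p to be more likely than not somewhere among N independent trials we need N > ln(2)/p. A sketch with a toy value of p (nothing like a real cosmological estimate):

```python
from math import log

# P(at least one success in N independent trials) = 1 - (1 - p)**N,
# which exceeds 1/2 once N > ln(2)/p. The p below is a toy stand-in;
# realistic fine-tuning probabilities are vastly smaller, and the
# required number of trials grows in direct proportion to 1/p.
p = 1e-6
N_needed = log(2) / p
print(round(N_needed))  # roughly 693,000 trials for a mere one-in-a-million event
```

Since the required N scales as 1/p, the more “rigged” our cosmic environment appears, the more extravagantly large the multiverse must be to pay for it.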

We must acknowledge of course that in the final analysis Dembski & Marks are anti-evolution ID theorists and therefore have their sights on “evilution”. They conclude that evolution, if it is to work in realistic time, must be assisted by a large dollop of active information; that is, the die must be loaded in its favour. As I see it there is no disagreeing with this general conclusion, but the irony is that just as the multiverse theorists acknowledge the concept of active information in a backhanded way, Dembski & Marks provide us with the general conditions of the scenario that evolution needs in order to succeed.

Dembski & Marks’ paper, like all the papers on “active information” I am aware of, provides no killer argument against evolution. In fact if anything it points to the general criteria that must be fulfilled if evolution is to work. They tell us that if evolution has happened then somewhere there is a bias, an improbable skew hidden up in the system. The first port of call in the search for this bias is, of course, the laws of physics. But as a rule, anti-evolution ID theorists will try to tell us that simple equations can’t create information. Their argument, such as I recall it, seems to be roughly based on the following lines.

1. The laws of physics are to be associated with necessity.

2. Anything that is necessary has a probability of 1.

3. The information content of anything with a probability of 1 is zero.

4. Therefore equations have no information content.

5. Information is conserved. (I give this point a qualified acceptance)

6. Therefore from 4 and 5 it follows that equations can’t create information.

The main fault with the above argument is point 1. Christian anti-evolution ID theorists often seem to be subliminal dualists and this inclines them to contrast the so called “natural forces” over and against the superior creator God. These inferior "natural forces" they inappropriately refer to as “chance and necessity”. The so-called “necessity” here is identified with the laws of physics or succinct mathematical stipulations in general. But such mathematical stipulations are themselves an apparent contingency and, as far as we are concerned, don’t classify as necessity; therefore they can embody information.

To see how laws can be the depository of information, it helps to understand that the quantity “information”, as Dembski and Marks use it (= - log[p] ), can only pick up high improbabilities; it is not very sensitive to the exact details of the source of those improbabilities. This is because it lumps together a field of independent probabilities into a single quantity by summing up the logarithms of those probabilities. Therefore “information” as an index cannot distinguish the difference between single probabilities and complex fields of probability. In short information is not a good measure of configuration. Thus single items of probability, if they have sufficient improbability, can embody as much information as collections or configurations of probabilities. This means that although it is true that equations don’t create information, they may yet be the depository of information in their own right if they are themselves highly improbable items; and this can be the case if those equations have been selected from a huge space of possibility over which Bernoulli’s PrOIR is applied. Equations are far from what anti-evolutionist ID theorists call “necessity”. In fact irony is piled upon irony in that we need look no further than Dembski and Marks’ paper for evidence that relatively simple mathematical stipulations may be the depository of active information.
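The point that -log(p) lumps a whole field of probabilities into one number, blind to configuration, can be checked in a few lines (my own toy numbers):

```python
from math import log2

# Information in the Dembski-Marks sense: I = -log2(p). For independent
# probabilities the logarithms add, collapsing a whole field of
# probabilities into a single quantity.
def information(p):
    return -log2(p)

# A configuration of ten independent events, each of probability 1/2...
config_info = sum(information(p) for p in [0.5] * 10)

# ...carries exactly as much "information" as one single event
# of probability 1/1024.
single_info = information(1 / 1024)

print(config_info, single_info)  # both 10.0: the index is blind to configuration
```

A single sufficiently improbable item, such as a contingent set of physical laws, therefore registers just as much information as a sprawling configuration of probabilities.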

In Dembski and Marks’ latest paper we read this:

Co-evolutionary optimization uses evolutionary tournament play among agents to evolve winning strategies and has been used impressively to identify winning strategies in both checkers [25] and chess [26]. Although co-evolution has been identified as a potential source for an evolutionary free lunch [69], strict application of Bernoulli’s PrOIR suggests otherwise.

Dembski and Marks then go on to rightly observe that such systems have a built in metric of gaming success and that this “loads the die” with active information:

Claims for a possible “free lunch” in co-evolution [69] assume prior knowledge of what constitutes a win. Detailed analysis of co-evolution under the constraint of Bernoulli’s PrOIR remains an open problem.

Once again I agree with Dembski and Marks; the co-evolutionary game must be suitably loaded for complex game-winning strategies to evolve. But this is to miss the giveaway point here. It seems that all one need do is define some succinct game-playing rules, then impose a constraint in the form of some relatively simple game-winning metric, and then, given these two constraints, the “thermal agitations” of randomness are able to find those complex game-winning strategies in polynomial time. A presumed corollary here is that the space of gaming strategies is reducibly complex. Thus it follows that the stipulation of the rules of the game and the selection metric are sufficient to define this reducibly complex space of game-winning strategies. These stipulations have the effect of so constraining the “search space” that the “random” agitations are no longer destructive but become the dynamic that seeks out the winning solutions.

The foregoing is an example of an issue I have repeatedly raised here and have even raised on Uncommon Descent. The issue concerns the question of whether it is in principle possible to define complex structures (such as living things and game-playing strategies) using relatively simple reductionist specifications and then find these specified complex structures in polynomial time. The generation in polynomial time of complex fractals and pseudo-random sequences are examples of a polynomial-time map from reductionist mathematical specifications to complex outputs. In the checkers example the reductionist specifications are the gaming rules and the metric that selects successful strategies; together these define a complex set of winning game strategies that can be arrived at in polynomial time using an algorithmic dynamic that employs random “thermal agitations”. Evolutionary algorithms that identify game-playing strategies are an example where simple mathematical stipulations map to complex outputs via a polynomial-time algorithm. Of course, the fact that evolutionary algorithms work for games doesn’t necessarily mean that the case is proved for biological evolution given the laws of physics, but the precedent set here certainly raises questions for me against the anti-evolutionist ID theorists’ tacit assumption that evolution in general is unworkable. At the very least it seems that evolution does have an abstract “in principle” mathematical existence.
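A minimal illustration of such a polynomial-time map from simple specification to complex output is Wolfram’s rule 30 cellular automaton: a one-line update rule whose centre column behaves like a pseudo-random sequence. The Python below is my own sketch, not anything drawn from Dembski and Marks’ paper:

```python
def rule30(width: int, steps: int) -> list:
    """Elementary cellular automaton rule 30. The whole 'specification'
    is one boolean expression: new cell = left XOR (centre OR right).
    Returns the centre column, which passes common randomness tests."""
    row = [0] * width
    row[width // 2] = 1           # a single seed cell
    centre = []
    for _ in range(steps):
        centre.append(row[width // 2])
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
    return centre

print(rule30(101, 16))           # a complex-looking bit sequence
```

Each step costs time proportional to the width, so an n-bit output arrives in polynomial time from a specification a few characters long.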


It is over this matter that I have unfinished business with the anti-evolution ID theorists. I have failed to get satisfaction from Uncommon Descent on this matter. I agree with the general lesson that, barring appeal to very speculative multiverse scenarios, information can’t be created from nothing – certainly not in polynomial time. But where I have an issue with the ID pundits of Uncommon Descent is over the question of whether or not it is possible to deposit the high information we observe in living structures in reductionist specifications such as the laws of physics. I suggest that in principle simple mathematical stipulations can carry high information content for the simple reason that maps which tie simple algorithms to complex outputs via a polynomial-time link are very rare in the huge space of mathematical possibility, and thus have a low probability (assuming PrOIR) and therefore a high information content. The confusion I see amongst Uncommon Descent contributors is the repeated conflation of high information content with complexity; as I have said, information as a measure lumps together a configuration's improbability into one index via a logarithmic addition over its field of probabilities, and thus it cannot distinguish single high-improbability items from complex configurations of independent probabilities. Elemental physics may therefore be the depository of high information.

As I look at the people at the top of the anti-evolution ID community, like Dembski and Marks, it seems to me that the implications of their work are not fully understood by the rank-and-file ID supporter. That supporter, it seems, leans on his academic heroes, quite convinced that these heroes provide an intellectual bulwark against the creeping "atheist conspiracy". The impressive array of academic qualifications that these top ID theorists possess is enough, it seems, to satisfy the average ID supporter and YEC that the battle against “evilution” has been won. The very human attributes of community identification, crowd allegiance, loyalty and trust loom large in this scientific debate.



* I originally (and perhaps wrongly) attributed this paper to Dembski; it has no author name although its URL has Marks’ name in the directory path.

Friday, December 04, 2009

University of Big Disappointments

This University of East Anglia “Climategate” business that William Dembski keeps banging on about is rather (too) close to home - literally almost within walking distance; a childhood stamping ground in fact. Fears, doubts and vulnerabilities come out in private emails, sometimes with a gloss of black humour, bluster and even superciliousness. The trouble is, nobody admits to having them, lest they be exploited by opponents and easily misrepresented. But doubts, vulnerabilities and even fears are the stuff of genuine science, as I suggested in this post long before “Climategate” had blown up. There is such a thing as a “scientific attitude”. However, covering up, secrecy, self-delusion, conceit, pride and above all epistemological arrogance are the antithesis of that attitude.


The Green House Effect at the University of East Anglia: Don’t cast the first stone if you live in a glass house like this.


Addendum 6/12/09. The Smith and Jones affair.

Below is a YouTube video that needs to be set beside the claims of those for whom this whole affair is a most wonderful windfall. Conspiracy theorist Alex Jones makes a brief and loud appearance on the video and is caught foaming at the mouth over the UEA emails. It's all very reminiscent of Pentecostal evangelical Barry Smith's millennium bug conspiracy, which of course is now all but forgotten. Interestingly and ironically, Smith's conspiracy theories implicated George Bush Senior in the new world order conspiracy; so are the religious right and the powerful environment-polluting corporation bosses supposed to be part of the conspiracy or against the conspiracy? To add to the "we're all having a ball" feel about these e-mail revelations I am beginning to wonder when David Icke is going to make an appearance; in fact he's probably got a YouTube video out there already, but I just can't be bothered to look.

I wonder what William Dembski thinks of some of the company he is keeping?


Monday, November 30, 2009

From Spears to Aircraft


A Sophisticated Flying Machine: A series of incremental technological changes has brought this configuration into existence.

This post by GilDodgen on Uncommon Descent gives an indication of the central place that Irreducible Complexity (IC) has in the ID theoretical canon; everything in ID theory seems to be either an aspect of this principle or a detail. This is what GilDodgen says:

The more we learn the more it appears that almost everything of any significance in living systems is irreducibly complex. Multiple systems must almost always be simultaneously modified to proceed to the next island of function. Every software engineer knows this, and living things are fundamentally based on software.

Evolution in the fossil record is consistently characterized by major discontinuities — as my thesis about IC being a virtually universal rule at all levels, from the cell to human cognition and language, would suggest — and the discontinuity between humans and all other living things is the most profound of all. Morphological similarities are utterly swamped by the profound differences exhibited by human language, math, art, engineering, ethics, and much more.


GilDodgen then goes on to suggest that the vast differences between human beings and other primates are evidence of the discontinuity imposed by IC.

In biological terms the crucial concept is structural stability; that is, the ability to self-perpetuate. But this is only possible if the components of a biological structure serve to promote self-perpetuation, and they can only do this if they work; that is, if they function correctly.

Crucially for ID, GilDodgen assumes that functionality only comes in absolutely isolated islands; that is, any change to a structure that goes beyond a certain small threshold entails a loss of functionality and thus a non-viable, unstable structure. Thus if IC is true then there are no incremental entry points into or out of a stable structure. Thus IC structures (by definition) have no stable precursor structures incrementally separated from them. Ergo, evolution cannot happen.

The inverse of the notion of IC is Reducible Complexity (RC); RC requires that the domains of functionality/stability be so connected that there exist ways of incrementally modifying a structure with a stable structure as the result. The RC conjecture is that there is a class of functional/stable structures that forms a fully connected set and that a broad range of structural complexity is found in this set; if such a set exists then a barrier to evolution is lifted.

Of great significance to ID theory is this quote from GilDodgen: “Multiple systems must almost always be simultaneously modified to proceed to the next island of function”. That simultaneous modification is the only way to reach the next island of stability is, in the mind of the ID theorist, sufficient evidence of the complete isolation of islands of stability in a huge sea of non-viability; for to jump the gap between islands of functionality using random changes would require the simultaneous adjustment of several features in just the right way. The probability of these changes coming together is utterly negligible and thus a barrier of improbability isolates the islands of stability. The only thing that we know capable of jumping the space between the islands is intelligence.
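The improbability arithmetic behind this barrier argument can be made concrete with toy numbers (my own, purely illustrative; nothing here is drawn from GilDodgen's post):

```python
# Toy numbers for the "islands of stability" argument.
p = 1e-4   # assumed chance of any one specific change per trial
k = 5      # number of features that must change to cross the gap

# Simultaneous modification: all k changes must land in one trial,
# so the probabilities multiply.
simultaneous = p ** k            # 1e-20 -- an effective barrier

# If a stable path of k single-change steps exists, each step costs
# only p, and the steps can be taken one trial at a time.
per_step = p                     # 1e-4 per step, repeatable over many trials

print(simultaneous, per_step)
```

The contrast between the two numbers is the whole dispute in miniature: IC asserts that only the simultaneous route exists; RC conjectures the stepwise route.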

Currently I have several issues with the concept of IC. The following points are really areas of research rather than killer arguments against IC.

1) If the only way to reach a nearby stable structure is by a very rare simultaneous change of features, then the IC case holds. However, if a route exists to a stable neighboring structure via a “path” consisting of a series of incremental changes to single features, each of which results in a stable structure, then we have a Reducibly Complex connected set of structures. However, these "single feature change" pathways are likely to be relatively rare in the space of all possible changes, thus giving the impression that they don’t exist.

2) Human technological advance has only been possible because the limited quantum of human intelligence (call it i) can leap the gap between islands of functionality. But, and here is the important point, i is not large enough to leap the gap between stone age spears and GilDodgen’s aircraft in one bound. However, it is clear that the islands of functionality in technological configuration space are close enough that, given the quantum of human intelligence i, the human mind can leap the gaps in functionality leading up to a sophisticated flying machine. It is conceivable that there are structures out there that can never be reached by the human mind because they are effectively irreducibly complex with respect to the quantum of human intelligence i. Nevertheless, it is clear that, given human technology, large parts of technological configuration space are connected enough (i.e. they are reducibly complex with respect to i) to make technological evolution possible. It seems that IC and RC are not binary opposites but come in degrees. Fortunately for us technological configuration space is reducibly complex with respect to i. But, and this is the 64 billion dollar question, is biological configuration space sufficiently connected with respect to the blind watchmaker?


3) Human Intelligence itself appears to employ a kind of evolution; it makes a series of incremental adjustments to its concepts, concepts that are either selected or rejected; search, reject and select is the general evolutionary algorithm.
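The “search, reject and select” loop of point 3 can be sketched in a few lines of Python (a toy illustration of my own, not anyone’s published algorithm; the fitness metric here, a simple count of 1s, stands in for whatever criterion actually does the selecting):

```python
import random

def evolve(fitness, genome_len=20, generations=1000, seed=0):
    """Minimal search-reject-select loop: mutate one feature at a time
    and keep the change only if fitness does not drop."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        candidate = genome[:]
        candidate[rng.randrange(genome_len)] ^= 1   # search: one incremental change
        if fitness(candidate) >= fitness(genome):   # select; otherwise reject
            genome = candidate
    return genome

# A simple "metric of success" that loads the search: the count of 1s.
best = evolve(sum)
print(sum(best))  # climbs toward genome_len
```

Note that the loop only ever takes single incremental steps, which is why it works at all on this connected fitness landscape; on a genuinely isolated “island” landscape it would stall.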

Sunday, November 29, 2009

Serpentine Logic

That devolved legless reptile is doing its damnedest to stay in the picture

This post on Uncommon Descent concerning ID guru William Dembski’s book “The End of Christianity: Finding a Good God in an Evil World” is interesting. This is what Dembski says about the book.

My book attempts to resolve how the Fall of Adam could be responsible for all evil in the world, both moral and natural IF the earth is old and thus IF a fossil record that bespeaks violence among organisms predates the temporal occurrence of the Fall. My resolution is to argue that just as the salvation of Christ purchased at the Cross acts forward as well as backward in time (the Old Testament saints were saved in virtue of the Cross), so too the effects of the Fall can go backward in time. Showing how this could happen requires extensive argument and is the main subject of the book. As for my title, “End of Christianity” involves a play on words – “end” can refer to cessation or demise; but it can also refer to goal or purpose. I mean the latter, as the subtitle makes clear: Finding a Good God in an Evil World.

The link to the interview with Dembski is also worth looking at.

Some comments

1. Although he minces his words and hedges (not surprisingly, given his particular Christian sub-culture), my guess is that Dembski is an “Old Earth” believer; otherwise his attempt to explain pre-fall evil as a “retroactive” result of an anthropocentric fall would seem rather futile. (Note: if the act of God in Christ is a powerful symbolic declaration of covenant by God, revealed in the fullness of time, then its ability to dispense grace retroactively is less enigmatic, whereas I can make little sense of a retroactive fall in this light.)

2. In line with Young Earth Evangelicalism’s world view Dembski assumes evil and suffering to be largely sourced anthropocentrically. And yet the serpent in Genesis 3 could be construed as a symbol of the presence of extra-human evil. The person(s) who penned Genesis 3, as is the wont of arcadian folk, would be ever aware of the lurking presence of evil in an unseen world. There is also the question over the interpretation of the enigmatic Romans 8:18-23, a passage which hints that the cause of evil and suffering is not quite so crisp and clear-cut as traditional Young Earth Evangelicalism would have it.

3. The problem Dembski is addressing comes out of contemporary Young Earth Evangelicalism’s world view and is thus very pressing in his own intellectual circles. Some of us, such as myself, who feel that there are deeper mysteries surrounding the source of suffering and evil have a less pressing need to explain pre-fall suffering. According to my concordance the word “good” of Genesis 1 is not quite the same as the word perfect (~ completion). I wonder what Dembski teaches his seminary students?

I have been a close witness of evangelical philosophy for over thirty years, and its sometimes cursory and shallow treatment of suffering and evil is not its only shortfall. So much about standard evangelical explanation fails to make sense of the world around us, not to mention its inconsonance with aspects of my own personal experience. It’s no wonder that when pressed the fallback position of many evangelicals is fideism, viz: “Faith is not always logical; if it was it would not be faith.” In the fideist mind the ability of faith to accommodate mystery is conflated with the ability to swallow illogicality. But I’ll hand it to William Dembski: he is a brave guy (and a nice guy as far as I can tell) and he is making a very valiant attempt at being logical. Pity about some of his fellow pilgrims.

Saturday, November 28, 2009

World Beater

I must admit that I share some of Denyse O’Leary’s doubts about the democratic intentions of the “new atheists”. Stalinist atheism is to liberal atheism as Inquisitional Christianity is to liberal Christianity. Is “new atheism” a Stalinist form of atheism? Well, as far as I’m concerned that question remains to be answered. Given the insults, accusations of “scientific heresy” and unreason screamed out by some “new atheists” (even at fellow atheists), I have to say that I’m not at all confident that democracy would be in a safe pair of hands should they gain exclusive political control. They are reminiscent of the revolutionary Marxists: “If you are not for us you are against us; we will see to your ‘re-education’ after the revolution.” The best complexion I can place on it is that in the wider social economy Stalinist atheism and Inquisitional Christianity are each a reaction to the other and that somehow they cancel out. Hopefully.