
Tuesday, April 30, 2013

Theology and North American ID. Part 1



When Stephen Meyer comes over to the UK why isn’t he speaking to the Faraday Institute and Christians in Science? Something is seriously wrong here.

The above is a video of Stephen Meyer of the Discovery Institute giving a presentation on Intelligent Design at the Royal Horse Guards Hotel, London. Meyer is part of the same North American ID culture that embraces IDist William Dembski. In many ways I like these people; they are intelligent, sensitive scholars of the faith who are prepared to reason and will not hold it against your faith if you disagree with them. They are a far cry from the hardened heretic-burning cranks of many a fundamentalist sub-culture. (In this presentation we find Meyer, like Dembski, very anxious to dissociate himself from Young Earthism). Nevertheless, they have still been abused by establishment scientists, Christian and atheist alike. I admire their courage in the face of this abuse, but the fact remains that I feel rather ambivalent about the ID movement that Meyer represents. They have a tendency to drift toward the political right, but that may be a consequence of their rejection by the left-leaning academic establishment. The material in Meyer’s presentation is typical of North American ID in that it assumes, as a matter of course, a “natural processes vs. Intelligent agency” perspective.

I hope I’m not doing an injustice to Meyer’s material, but basically he presents the expected dichotomized paradigm, which in this case is expressed thus:  

The cell is a highly complex replicating molecular machine. We have only the vaguest ideas of how these replicators came about. Therefore it seems very unlikely that “natural” processes generated them, and our best explanation is that they are a product of intelligence.

Reading between the lines here, it is clear that this is essentially the “God of the gaps” argument in its classic form: a failure of “law and disorder” science to elucidate a problem is made good with a proposed “Intelligent intervention”. We know, of course, that by “intelligent” these IDists are actually thinking “God”.  And this is why I have an issue with North American ID. These IDists are pressing God into the role of a player who is on the stage of the cosmos rather than being an overall choreographer and enabler, both immanent and eminent in respect of the cosmos. In North American ID God has an auxiliary role in creation in that he complements the perceived inadequacies of “natural” Law and Disorder explanations.

Perhaps sensing that the Divine Creator must have a more fundamental role than just being a contriver of unlikely configurational boundary conditions, this brand of ID will sometimes suggest that in principle, both philosophically and physically, life could not be generated by a physical regime. The underlying motive here is to show that it is logically impossible for a physical regime to generate life and therefore logically necessary for God to contrive the required boundary conditions, “just like that”, as Tommy Cooper used to say. To this end, for example, evolution is sometimes wrongly caricatured by IDists as an attempt to get something for nothing; or, in the words of one IDist, a way of “explaining how the world could have arisen on its own”.  However, evolution, as it is currently understood, is certainly not a logically self-sufficient process. Evolution, if it works, classifies as a highly contingent and “cleverly” selected algorithm for generating life and should not be caricatured using terms loaded with connotation like “materialistic”, “blind”, “random”, “undirected” or “a free lunch”.

As I have said many times before, I'm uncomfortable with the North American IDist's "Naturalism vs. Intelligence" dichotomy on several counts; not least on the count that its apparently non-committal introduction of a third category of causation called “intelligence” (which in and of itself is not at all untoward) is, in my view, inappropriate when applied to a Christian God; we are not talking here of an alien applied chemist who created life by tampering with the basic molecular building blocks of our physical regime. As a Christian theist Meyer doesn't believe that the intelligence he is talking about works within the cosmic law and disorder regime as if it were a molecular engineer; rather, Meyer’s belief must be that God is a totalizing entity in Whom the cosmic stage and all that happens on it is immersed. Therefore when we look at “natural processes” there is a sense (although one mustn't take one’s metaphors too literally here) in which we are seeing the creative and sustaining power of God rather than something which can be contrasted over and against Intelligent agency. Given that a Christian God is sovereign manager, choreographer and enabler, then whenever we look at something generated by natural processes we are doing God an injustice if we say natural forces did it.

So-called “natural processes” aren't one half of a “Nature vs. Intelligence” dichotomy but are a manifestation of a Sovereign Management that is intimately bound up with the cosmic substrate.  Yes, it is a truism that there will always be a scientifically unpluggable gap in the sense of there being an irreducible Grand Logical Hiatus that embraces the cosmos everywhere and everywhen. But what Meyer presents us with is, in fact, the old style God of the Gaps scenario whereby the claim that God did it is only plausible up until such a time that someone actually succeeds in explaining OOL in terms of “law and disorder” processes. At which point (should it ever arise), it will then look as though, according to Meyer’s philosophy, God didn't do it. This is a consequence of North American ID’s tendency to put Divine Intelligence on the same logical level as one might an alien molecular engineer. If you can disprove alien interference in natural history with a good theory of evolution and OOL then perhaps you can disprove God’s “intervention” in biology as well! North American ID shouldn't be so emphatic about Intelligent Design being science, because it is clear that when Intelligent Design becomes bound up with ultimate origins it is an interdisciplinary subject that straddles the boundary of both science and theology.

One often hears the call that ID isn't science. In my opinion it does include science, but not hard science: as is always the case when intelligence/personality is involved, there is a fair measure of inscrutability introduced. I suppose it is conceivable that a sophisticated replicator could be taken as one of nature’s created givens and that a science of evolution of sorts could then proceed on this basis. But as regards OOL the fact is that positing an intelligence of unknown powers, motives and methods introduces a wild card that is clearly not conducive to very tractable science.

OK, so perhaps Meyer and his friends are right, and our selected physical regime is unable to generate even a single cell replicator. If that is true then the science of OOL is not destined to make much progress except perhaps to annotate a description of the workings of the cell with a series of “Wows!”.  ID science, I believe, can’t get a great deal further than that; making predictions based on the guessed motives and methods of a highly alien intelligence is a hazardous business to say the least. But the irony is that if Meyer believes in a Divine Super-Intelligence that is both immanent and eminent in relation to the cosmos this actually subverts the North American case that is inclined to rule out “natural processes” as the generator of life. For if we are talking about Divine intelligence, then presumably that intelligence is great enough to muster the computational resources needed to select a physical regime that can generate life. Tractable science would then be back on the agenda!

In part 2 I will look at some of the specifics of Meyer’s talk, in particular the North American ID community’s proprietary concepts of self-organization and information.



Tuesday, April 23, 2013

Mangling Science Part 2: Opening up Ken’s Can

My advice to religious novices is "Stay away; we're not talking spaghetti here"

In the first part of this series I introduced the protestant fundamentalist’s conflict with science and their attempts to solve this problem. One solution is separatism – that is, start a sect or cult so cut off from society that it circumvents the need to engage with profane knowledge. A second response is to purge one’s science of results that contradict sacred knowledge. It is this second class of response which is the focus of this series and in particular Answers in Genesis’ attempt to arrive at a criterion whereby some scientific results can be retained and some rejected. To this end Ken Ham, in a blog post dated 19th February and entitled Darwin, Dinosaurs and The Devil, directs us toward an AiG article by Troy Lacey entitled Deceitful or Distinguishable Terms—Historical and Observational Science. Ham and Lacey’s articles make use of what they think of as a distinction between observational science and historical science. For obvious reasons this dichotomy is crucial to AiG’s way of thinking, for it is science’s insights into the past that so obviously contradict AiG’s version of history, a version which compacts billions of years’ worth of events into a mere 6000 years. Their aim, of course, is to rubbish historical science as an epistemic house of cards. In its place they make their own proposals which they claim are at least as well founded. At the same time they think they can hang on to so-called “observational” science in the hope that some of science’s kudos rubs off on them, enough to prevent them looking like what they actually are; namely, fundamentally anti-science.

As I said in my last post Ham and Lacey’s articles leave us with a problem.  Ham refers us to Lacey, but combing Lacey’s article we find nothing that defines what they purport to be “observational” science. In fact we have to return to Ham to get a hint of an answer:

….[Secularists] use the word “science” for both historical science (beliefs about the past) and operational science (based on observation that builds our technology).

Here Ham identifies “observational” science with the science used to construct artifacts. The science of artifacts (which is essentially applied physics and chemistry) concerns itself with the present tense continuous physical laws which govern things everywhere and everywhen. Certainly, we do have an ontological distinction here. History, need I say, is about event particularity whereas the physical sciences concern themselves with generality; that is, the general functions and algorithms which constrain happenings everywhere and everywhen. Particular events and the rules which constrain these events are two very different, albeit related, objects. There is a deep ontological distinction between what’s actually happened and what rules we think those happenings are subject to.

But this ontological distinction is of little help to Ham in his attempt to secure a natural distinction in science that allows him to reject historical science as somehow epistemologically second class: successful experimental testing in the physical sciences depends on the interpretation of experimental texts of previous tests, texts, needless to say, which refer to historical events. These texts must be correctly interpreted in order to at least attempt a duplication of the conditions needed for satisfactory testing. Ergo, history is very important in the testing of the physical sciences. Moreover, even when we do feel we have grasped a physical principle with sufficient confidence to use it to build an artifact, that artifact is effectively an experimental apparatus that repeatedly tests those principles. Just how reliably the artifact operates can only be assessed from a knowledge of its history. So if Ken is using the term “operational science” to identify the physical sciences then it is clear that they cannot be separated from our grasp of history. They are so seamlessly integrated with history as to be inseparable from it.

Ham and Lacey, however, don’t base their distinction on the ontological difference between history and the physical laws that generate history. Instead they attempt to prise apart the physical sciences from the historical sciences using an epistemic distinction. Accordingly Lacey writes:

….we have stated that neither creationism nor cosmic evolution nor Darwinian biological evolution is observational science, and they are not observable, testable, repeatable, falsifiable events. Therefore, we would state that you cannot “empirically prove” them.

That statement is actually more or less correct in my opinion. But the catch is that in an absolute sense it characterizes all our theoretical constructions right across the board, historical and otherwise! The ontological distinction between the general functions of physics and “happened” events doesn’t make those general functions in principle any more observable, testable, repeatable and falsifiable. In particular it would be entirely wrong to claim that very abstract universal functions, like say various quantum equations, can ever be “observed”; in fact we never observe our theoretical objects, we only sample some of their consequences.

It is perhaps no surprise, given fundamentalism’s taste for dichotomy, that Lacey appears not to understand that observability, repeatability, testability, and falsifiability are not “on or off” properties, but properties that come in degrees, depending on how deeply embedded an object is in the logical nexus of our theoretical structures. Atoms and fundamental particles, for example, are relatively logically remote from observation and can hardly be classified as “observable”.  Nevertheless they are part of a theoretical narrative that successfully embeds much of our experience. The centre of the Sun is not “observable” but is once again the subject of theoretical constructions which explain much about star behavior. Moreover, given the limit on the speed of light, the Sun is clearly an historical object, as in fact are all physical objects whose evidential signals, to a lesser or greater degree, arrive at our doorstep with a time delay. Repeatability is a big issue in the physical sciences; physical systems are a complex of variables that conceivably could change erratically in ways which compromise the rigor of experimental repeatability. No experiment can ever be considered repeatable without making the reasonable assumption that a rational uniformity pervades a cosmos that could otherwise change in quite irrational and cussed ways. Dark matter events (if dark matter exists) cannot be produced at will; instead we have to sit it out until they happen. As Popper himself admitted, no theory is absolutely falsifiable: the reason is that our theoretical narratives form a logical structure that is complex enough to contain many adjustable variables which can conceivably be varied to insulate a theory from unequivocal falsification.

As I was saying, observability, repeatability, testability, and falsifiability are not “on or off” properties, but graded properties. For example our narratives about giraffes are more observable, repeatable, testable, and falsifiable than are narratives about brachiosauri, and our narratives about brachiosauri are more observable, repeatable, testable, and falsifiable than narratives about Bigfoot. So the problem for Ham and Lacey is where do they draw the line? At what point do they tell us that an object is not observable? What they don’t understand is that distance in time is not the only variable that impacts an object’s epistemic accessibility. Galaxies are millions of light years away and yet their behaviour is more epistemically accessible than some complex social objects, like say the relation of economics to cultural attitudes. The abstract problem in science is the same across the board, namely that of attempting to embed accepted data points into a theoretical structure, whether that structure is historical or is everywhere and everywhen. The theoretical narratives of science are in this sense timeless; they are quasi-platonic objects that attempt to explain the agreed data samples, and the epistemic accessibility of those explanatory objects is not just a function of time. If one attacks the epistemic status of historical science on the basis on which Lacey attacks evolution, then, because the essential epistemic method across science is uniform, a precedent is set for the subversion of the whole of science.
***

Fundamentalists are particularly bad at detecting graded phenomena, perhaps because they habitually live in a world of black and white. Their very tribal concept of the social world is the key to much fundamentalist thinking and attitude. Their sectarianism demands clear cut social demarcations; in such a context there is a need to distinguish the saints from the heretics, the sheep from the goats, the divine from the demonic, the devout from the godless, the heroes from the evil conspirators, the goodies from the baddies. Therefore in fundamentalist circles there is a call for shibboleths, faith tests and above all an epistemic arrogance that gives them certainty in the discernment of truth and error. In fact one can see hints of this phenomenon in the heading of Lacey’s article, where it is clear that any who think differently are going to be accused of using deceitful terms, thus casting a moral light on an intellectual issue. Lacey and Ham are out to get convictions for heresy and blasphemy against non-tribe members and they will use their incompetent grasp of science to assist them. They therefore badly need what they fancy to be a fundamental distinction between observational and historical science to help them sort out the sheep from the goats.


To be continued…. The Diet of Worms. Pathological science: Same data but different interpretations?  Mature creation theory? Geocentricity? The pathological coordinate transformations of John Byl and Jason Lisle.

Tuesday, April 09, 2013

More on Self Organisation and Richard Johns.

Self Organising Ants in a Sugar Jar.

Organic cells and their machinery act as a kind of recipe or algorithm from which full-sized multi-cellular organisms “unwind”. There is here, therefore, an effective map from a single cell configuration of atoms to the large scale multi-cellular configurations they generate.

Organic cells may be small but they are nevertheless large atomic configurations of many billions of atoms. Even so, in terms of atomic count they are a lot smaller than the multicellular organisms they define. Since multicellular organisms “unwind” from single cells, the potential in single cells amounts to a kind of data compressed way of describing the organism they generate. *1
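To make the “unwinding” metaphor a little more concrete, here is a minimal toy sketch (nothing to do with real cell chemistry; the rewrite rule and axiom are arbitrary choices of mine): a “recipe” only a handful of characters long, when repeatedly applied, expands into a string a couple of million characters long.

```
# A toy illustration, not biology: a rewrite rule a few characters long
# "unwinds", after repeated application, into a string millions of
# characters long. The rule and axiom are arbitrary choices for this sketch.

RULES = {"A": "AB", "B": "A"}   # the entire "recipe" is a handful of characters
AXIOM = "A"

def unwind(axiom, rules, generations):
    """Repeatedly apply the rewrite rules, growing the string each pass."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

if __name__ == "__main__":
    grown = unwind(AXIOM, RULES, 30)
    recipe_size = len(AXIOM) + sum(len(k) + len(v) for k, v in RULES.items())
    print(f"recipe size : {recipe_size} characters")
    print(f"grown size  : {len(grown)} characters")
```

The only point being illustrated is that a generated structure can dwarf the recipe that generates it, which is the loose sense of “data compression” I have in mind.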

Embarking on a computation that used the method of randomly searching for a configuration of atoms that constituted a viable multicellular organism would be an impractically long job because obviously the class of viable organisms is very likely going to be an extremely small fraction of all possible configurations of atoms.

However, a computation to find a viable organic form could be shortened considerably by randomly searching for a single cell that mapped to a viable organism; after all, a single cell contains a lot fewer atoms. Even so, it would still, of course, be an impractically long job. But the principle is there: Searching for the single cell “algorithms” that generate living forms is a lot less computationally complex than directly searching for the full grown organism. *2.
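A rough way to quantify “impractically long”, assuming the principle of equal a priori probabilities and treating blind random search as the baseline (the bit counts below are illustrative guesses of mine, not measured figures): the expected number of trials needed to hit one specific n-bit target is of the order of 2^n, so shrinking the target description from organism-sized to cell-sized shrinks the exponent enormously while still leaving it astronomically large.

```
import math

# Rough sketch only: under the principle of equal a priori probabilities, a
# blind random search for one specific n-bit configuration needs on the order
# of 2**n trials. The bit counts below are illustrative guesses, not data.
ILLUSTRATIVE_TARGETS = {
    "multicellular organism, described atom by atom": 10**15,
    "single cell, described atom by atom":            10**10,
}

def log10_expected_trials(bits):
    """log10 of the expected number of blind random trials, i.e. log10(2**bits)."""
    return bits * math.log10(2)

for name, bits in ILLUSTRATIVE_TARGETS.items():
    print(f"{name}: roughly 10^{log10_expected_trials(bits):.3g} trials")
```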

Since cells effectively constitute algorithmic encodes of the multicellular organisms they generate then, as I have already said, they can be thought of as “data compressed” versions of the much more extensive multicellular configurations. The question naturally arises, then, as to whether further data compression can take place. That is, can organic forms be encoded in even more succinct “algorithms” than single cells? I am, of course, thinking of our physical regime as it is encoded in the laws of physics and the question of whether via “self-organization” (=OOL and evolution) living structures are implied with a realistic probability given the age of the cosmos. Measured in the bit lengths needed to define them, these laws are going to be expressible in something quite a bit shorter than that needed to directly encode a multicellular form or even a single cell.

Let’s just say for the sake of argument that the laws of physics can be expressed in a few thousands of bits of information. That’s definitely going to be quite a bit shorter than the trillions of trillions of bits needed to describe organic forms directly.  So, if a suite of physical laws exist which imply a universe capable of generating life in reasonable time, then randomly searching through a “mere” few thousand bits is a lot quicker than randomly searching the number of bits needed to describe a working organic form whether unicellular or multicellular. *3

***

When I suggested to IDist Richard Johns that it is still possible (given the state of our knowledge) that our particular physical regime may have given rise to organic forms via self-organisation he said:

But even in that case, self-organisation theories of evolution will be in a difficult position. For they will then be committed to the claim that living organisms are algorithmically (and dynamically) simple. In other words, living organisms are like Pi, merely *appearing* to be complex, while in fact being generated by a very short program. (Vastly shorter than their genomes, for example.)

My reply was:

One more thing: Imagine that you were given the problem of Pi in reverse; that is you were given the pattern of digits and yet had no clue as to what, if any, simple algorithm generated it. The hard problem then is to guess the algorithm – generating Pi after you have found the algorithm is the easy problem. So to me life remains algorithmically complex even if it’s a product of Self-Organisation

And that latter statement becomes clearer in the scenario I have been developing above: for if a suite of laws exists capable of generating organic forms with reasonable probability within reasonable time, then we still have to search through all the combinations available to thousands of bits: in terms of the age of the cosmos that is still a hugely impractically long job! So even if life can be generated by the laws of physics, it can hardly be classified as algorithmically simple! Yes, life would be algorithmically simpler, but far from simple in an absolute sense!
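A back-of-envelope sketch of just how impractical, with every figure being a rough assumption on my part: even a wildly generous estimate of the elementary “trials” available over cosmic history falls hundreds of orders of magnitude short of a blind search over a few thousand bits.

```
import math

# Back-of-envelope sketch; every figure is a rough assumption for illustration:
# ~10^80 particles x ~10^17 seconds of cosmic history x ~10^43 Planck times per
# second gives a wildly generous budget of ~10^140 elementary "trials".
LOG10_COSMIC_TRIAL_BUDGET = 80 + 17 + 43        # ~10^140

LAW_SUITE_BITS = 2000                           # "a few thousand bits" (assumed)
log10_search_space = LAW_SUITE_BITS * math.log10(2)   # ~10^602 possibilities

print(f"log10(cosmic trial budget)        ~ {LOG10_COSMIC_TRIAL_BUDGET}")
print(f"log10(blind search over the laws) ~ {log10_search_space:.0f}")
print("shortfall, in orders of magnitude :",
      round(log10_search_space - LOG10_COSMIC_TRIAL_BUDGET))
```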


Footnotes:

*1 Strictly I am not talking here about data compression as it is understood practically in data processing, where an algorithm must create a one to one map between a compressible object and its compressed version and allow a reversible transformation from one to the other to take place. In the more general idea I am putting forward here the transformation may be neither reversible nor one to one (as is in fact the case with “lossy” JPEG compression). Moreover, physical algorithms don’t necessarily have the halting properties that effectively say “we have arrived at the required configuration!”.  In the “compression” I'm talking about here, all I'm looking for is a general enlargement of the size of the data field via the application of the physical algorithms: the map here is a much fuzzier concept whereby the law and disorder suite is simply required to generate viable organisms with a reasonable probability within reasonable time.


*2 Clearly a random search is certainly not the most efficient way to search an exponential tree! Systematic searching may arrive at a solution much faster (although in worst case scenarios systematic searching can take even longer than a random search). However, random search, in having no particular systematic bias (i.e. no initial information), can be used as a kind of standard measure of computational complexity; it’s a bit like defining the distance to the stars using the time a snail would take to crawl there. Using random search as the fundamental unit of complexity measure only has the effect of introducing a very large constant of proportionality in the exponential expression for computation time. Random search, as it were, gives us a likely upper limit on the search time, a limit measured from a kind of informational absolute zero; that is, from a “know nothing, learn nothing” perspective.

*3 Here I haven’t mentioned that the computation required to determine whether we have reached a life generating universe includes a (polynomial time) verification that it is capable of generating life; the only sure-fire way of doing this is to run the scenario through to see if it does produce life, which of course would require a universe lifetime! This simply changes the constant of proportionality in the time complexity expression.

Plantinga Catches up on Unstable Self Reference.



The above is a 2-minute video of Alvin Plantinga on the subject of destructive self reference. I looked into this subject twenty years ago in 1993. See here: http://quantumnonlinearity.blogspot.co.uk/2008/08/how-to-know-you-know-you-know-it.html

Tuesday, April 02, 2013

Once More into the False Dichotomy Zone: "Naturalism vs. Design".



(Picture from http://www.faradayschools.com/re-topics/re-year-10-11/god-of-the-gaps/ )

Established evolutionary theory may be a good theory, but I wouldn't say I'm 100% convinced, and I’ll continue to read with interest the views of the de-facto Intelligent Design community.  But in spite of that I certainly don’t share what appears to be the de-facto IDists’ well-motivated anti-evolutionary complex. On occasion this underlying complex manifests itself in a strenuous drive to find in-principle refutations that short cut the work of debunking evolution. An example of this is Granville Sewell, who is thoroughly beguiled by a belief that the 2nd law of thermodynamics provides an in-principle refutation of evolution.

Before I go any further with a critical look at a particular post on the IDist web site Uncommon Descent let me make it clear that I at least agree with the de-facto IDists on this: One can’t do natural history without assuming as a starting point a world with an initially high burden of information. That is, our world could not be as it is without being resourced by some very rare conditions, whether of actual up and running configurations or the appropriate algorithmic generators of those configurations. If from either of these starting points we use the principle of equal a-priori probabilities to convert rarity of case into high improbability, we infer that our cosmos contains an irreducible burden of information; in a nutshell this is William Dembski’s result. If one is so inclined this inevitable logical hiatus readily takes us into theology, but I won’t touch that subject here. Suffice to say that I agree with Dembski’s core thesis that the cosmos’s burden of information is irreducible. In fact it even applies to multiverse scenarios.
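For anyone unfamiliar with it, the rarity-to-improbability-to-information step referred to above is, in sketch form, just the standard self-information measure; the fraction used below is a purely illustrative number of my own, not anything measured.

```
import math

# Sketch of the rarity-to-improbability-to-information step. Under the
# principle of equal a priori probabilities, if the "viable" cases make up a
# fraction p of all possible cases, the implied information burden is
# -log2(p) bits. The fraction below is an arbitrary illustrative value.
p_viable = 1e-150
information_bits = -math.log2(p_viable)
print(f"implied information burden: about {information_bits:.0f} bits")
```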

The recent Uncommon Descent post I am considering here, however, deals with less fundamental questions and serves to highlight where I would depart from UD thinking.  Below I quote this post in its entirety and interleave it with my own commentary.

March 23, 2013 – Posted by kairosfocus under Intelligent Design:-
EA nails it in a response to an insightful remark by KN (and one by Box): “the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium”
Here at UD, comment exchanges can be very enlightening. In this case, in the recent Quote of the Day thread, two of the best commenters at UD — and yes, I count KN as one of the best, never mind that we often differ — have gone at it (and, Box, your own thoughts — e.g. here — were quite good too  ).

My Comment: The reason why Kairosfocus favours the two commenters he is about to quote is that he has very much backed the law and disorder vs. design dichotomy and he is quite sure that law and disorder have not generated life. He’s of the school of thought that once you have eliminated law and disorder from the enquiry, intelligent design is left as the explanation for living structures. The trouble with this view is that it can give the false impression that law and disorder vs. design are mutually exclusive categories. I have expressed doubts about this dichotomy on this blog several times. However, to be fair to Kairosfocus I’m sure he understands that if our cosmic law and disorder regime has generated living configurations then there still remains the irrefutable work of Dembski, work that, as Dembski admits, doesn't in and of itself contradict evolution.

But if Kairosfocus is right and the cosmic law and disorder regime is inadequate to generate life then this means that “design theory” becomes a very accessible and compelling argument; it is easy to picture some kind of homunculus molecular designer piecing together the configurations of life much like a human engineer. The OOL/evolutionary alternative requires one to grasp some rather difficult to understand notions employing information theory, fundamental physics and algorithmics.

Anyway, continuing with the UD post:

Let’s lead with Box:
Box, 49: [KN,] your deep and important question *how do parts become integrated wholes?* need to be answered. And when the parts are excluded from the answer, we are forced to except the reality of a ‘form’ that is not a part and that does account for the integration of the parts. And indeed, if DNA, proteins or any other part of the cell are excluded from the answer, than this phenomenon is non-material.


My Comment: This, I think, is an allusion to one of the de-facto ID community’s better ideas; namely irreducible complexity. In non-mathematical terms irreducible complexity can be expressed as follows: organic components can only exist if they are part of an organic whole that maintains their existence. But conversely the survival of the organic whole is dependent on the individual components surviving. In other words we have mutual dependence between the parts and the whole. So, since organic wholes depend on parts and parts depend on organic wholes, it appears that this mutual dependence prevents an evolutionary piecemeal assembly of an organism from its parts. The conclusion is that each organic form came into existence as a fait accompli.  However, this logic has a loophole that evolutionists can exploit. The kind of incremental changes that can be conceived are not stuck at the discrete level of mutually dependent parts. It hardly needs to be said that organic components are themselves composed of far more elementary constituents, namely fundamental particles. Therefore the question naturally arises as to whether the organic parts themselves can be incrementally morphed at the particulate level and yet still leave us with a viable stable organic whole. This, of course, takes us into the fundamental question of whether configuration space with respect to these incremental changes is reducibly complex, a concept defined in the post here. But as I mention in that latter post there is an issue with reducible complexity: given that the number of viable organisms is likely to be an all but negligible fraction of all the total possible configurations of atomic parts, it is certainly not obvious to me that a practical reducible complexity is a feature of our physical regime. But conversely I can’t prove that it isn’t a feature!

The point I am making here is that because the UD comments above remain at the discrete “part” level rather than the more fundamental particulate level they don’t scratch the surface of the deep theoretical vistas opened up by the reducible complexity question. But there is, I’ll concede, a prima facie case for the de-facto ID community’s skepticism of evolution, a case that particularly revolves round the idea of irreducible complexity; although this skepticism appears to be motivated by a narrowness of perspective, namely, the perspective that “Design” and “Naturalism” so called (i.e. OOL and evolution) are at odds with one another.

Now, it may well be that evolutionary theory as the scientific establishment conceives it is wrong, perhaps because irreducible complexity blocks the incremental changes evolutionary theory demands. But one feels that if evidence came to light that unequivocally contradicted the de-facto ID community’s anti-evolutionism (if such is possible) it would mean a very drastic revision of their “design vs. nature” paradigm. The kind of argument above regarding the apparently all-or-nothing existence of organic structures, although in some ways compelling, is certainly not absolutely obliging. The UD argument I have quoted regarding the holistic nature of organisms does not classify as a killer “in principle” argument against evolution. The de-facto ID community is very enamored of the metaphor of the intelligent homunculus who works like a human engineer in contradistinction to the so-called “naturalistic” evolutionary mechanisms. But there is a great irony here: if physical regimes implying reducible complexity have a mathematical existence then the computational resources needed to find and implement such a regime could be put down to an intelligent agent. Ironically then, using the very principles the de-facto ID community espouse, a workable evolution can hardly be classified as “natural” but rather very “unnatural” and moreover evidence of a designer! If the de-facto IDists are prepared to espouse an all but divine designer, such a designer could be the very means of solving the problem of selecting a physical regime where OOL and evolution work!

KN, 52:  the right question to ask, in my estimation, is, “are there self-organizing processes in nature?” For if there aren’t, or if there are, but they can’t account for life, then design theory looks like the only game in town. But, if there are self-organizing processes that could (probably) account for life, then there’s a genuine tertium quid between the Epicurean conjunct of chance and necessity and the Platonic insistence on design-from-above.

My Comment: Self-organization, so-called, is not of necessity a tertium quid; it could yet be the outcome of a carefully selected Law and Disorder dynamic. In fact if evolution and the necessary OOL processes that must go with it are sufficient to generate at least an elementary form of life, this would classify as “self-organization”. Richard Johns, who is an IDist, would agree on this point. In a published paper Johns probes the subject of self-organization using a cellular automata model. Cellular automata are based on a law and disorder paradigm and make use of no tertium quid. Of course, as a de-facto IDist Johns is somewhat committed to the notion that this form of self-organization cannot generate life, but his paper does not succeed in proving the case either way. In fact in order to support his prior commitment to the inadequacy of self-organization he hamstrings law and disorder as a means of self-organization with a habitual mode of thinking that has become fixed in people’s minds ever since Richard Dawkins coined the phrase “The Blind Watchmaker”. In Johns’ case he applies the general idea behind the Blind Watchmaker by taking it for granted that the law and disorder algorithms controlling his cellular system are selected blindly. Since it is a likely conjecture that life generating law and disorder systems are extremely rare cases amongst the class of all possible algorithmic systems (if indeed they have mathematical existence at all), then clearly blind selection of the cellular algorithms is unlikely to give us a system that generates living configurations! But if Johns believes in an omni-intelligent agent of open ended powers then that agent could just as well express itself through the selection of just the right life generating regime (assuming it has a mathematical existence) as contrive living configurations directly.  Given the ID culture Johns has identified with, he is likely to think of self-organization as a “naturalistic” method of generating life and so he hamstrings this notion by simply not allowing it to be set up via intelligent agency. Of course, if you disallow intelligence from expressing itself in this way and insist on the selection of physical regimes on a blind random basis then you are not likely to end up with a life generator!
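To illustrate what “law and disorder with no tertium quid” means in the cellular automata context, here is a minimal sketch (far simpler than anything in Johns’ paper, which I am not reproducing here): a fixed local rule applied to a random initial row. The rule number is one contingent selection out of the 256 possible elementary rules, and most other selections produce nothing of interest, which is precisely the point about selection made above.

```
import random

# Minimal elementary cellular automaton: pure "law and disorder", i.e. a fixed
# local rule (the law) applied to a random initial row (the disorder), with no
# third kind of cause. Rule 110 is one contingent choice out of the 256
# possible elementary rules; most other choices produce nothing of interest.
RULE_NUMBER = 110
WIDTH, STEPS = 64, 32

rule = [(RULE_NUMBER >> i) & 1 for i in range(8)]   # lookup table for the rule

def step(row):
    """Apply the local rule to every cell, wrapping round at the edges."""
    return [rule[(row[(i - 1) % WIDTH] << 2) | (row[i] << 1) | row[(i + 1) % WIDTH]]
            for i in range(WIDTH)]

row = [random.randint(0, 1) for _ in range(WIDTH)]   # random starting conditions
for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```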

Notice that in the quote from KN he too is inclined to see self-organization and design theory as two competing scenarios whereby elimination of one leaves the other as the “only game in town”.   In fact self-organization is mysterious enough to KN that it classifies as neither law-disorder nor design, but a tertium quid. The naturalism vs. intelligence dichotomy is so fixed in his mind that it has never occurred to him that self-organization of the law and disorder variety leaves us with similar issues of logical hiatus and computational complexity as does the idea that living configurations are simply a fait accompli.  He just doesn’t make a connection between the large measure of computational complexity implicit in the selection of the right physical algorithms and a design decision! I see this as yet another manifestation of the false dichotomy of God did it vs. Naturalism did it.

Self-organization is, in fact, a very bad term. The elementary parts of the cosmos never organize themselves; they only organize because an imposed and carefully selected physical regime controls them. The term “self” is yet another subliminal signal of the “naturalistic” view that somehow the elementary parts of the cosmos possess some power of organization in and of themselves. But think about it: that’s not unlike claiming that the bits in, say, a Mandelbrot set have the innate power to organize themselves into intricate patterns!
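The Mandelbrot analogy made concrete in a quick sketch (the grid size and iteration cap are arbitrary choices for a rough text rendering): the intricate boundary that appears is not organized by any power in the points of the plane; it is wholly imposed by the externally chosen iteration rule z -> z*z + c.

```
# The intricate boundary drawn below is not "self"-organized by the points of
# the plane; it is wholly imposed by the externally chosen iteration rule
# z -> z*z + c. Grid size and iteration cap are arbitrary choices for a quick
# text rendering.
WIDTH, HEIGHT, MAX_ITER = 78, 30, 40

def in_set(c, max_iter=MAX_ITER):
    """Crude membership test: does the iteration stay bounded?"""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

for j in range(HEIGHT):
    y = 1.2 - 2.4 * j / (HEIGHT - 1)
    print("".join("#" if in_set(complex(-2.0 + 2.8 * i / (WIDTH - 1), y)) else " "
                  for i in range(WIDTH)))
```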

EA, 61: . . .  the evidence clearly shows that there are not self-organizing processes in nature that can account for life.
This is particularly evident when we look at an information-rich medium like DNA. As to self-organization of something like DNA, it is critical to keep in mind that the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium. By definition, therefore, you simply cannot have a self-ordering molecule like DNA that also stores large amounts of information.
The only game left, as you say, is design.
Unless, of course, we want to appeal to blind chance . . .

My Comment:  EA is probably right about there being no evidence for self-organization; but only as an extra tertium quid factor. There is of course evidence for evolution as a form of self-organization arising from a cellular automata system, but just how obliging this evidence is and just how successfully the theory joins the dots of the data samples is what the debate is about!

EA’s point about the conflict between information storage and self-organization is, I think, this: self-organization, at least as it is conceived by Richard Johns and myself, is a highly constrained process; though it may generate complex forms it nevertheless has low redundancy, inasmuch as it is not possible to arbitrarily twiddle the bits of a self-organized configuration without the likelihood of violating the algorithmic rules of this process. In contrast arbitrary information storage allows, by definition, arbitrary bit twiddling, and therefore one can’t use a self-organized system to store any old information. Self-organization only stores the information relevant to the form it expresses. For example I couldn’t arbitrarily twiddle the bits of a Mandelbrot set without violating the rules of the algorithm that generated it.

However, I believe EA has misapplied this lesson with some hand waving. If OOL and evolution have generated life using the algorithms of a cellular system it would classify as self-organization (albeit with “self” being a complete misnomer). OOL and evolution would work by virtue of the selection of what is likely to be a very rare algorithmic case and this rarity would imply a corresponding high level of information. Self-organized systems are algorithmic ways of storing the information found in the complex patterns they generate. Ergo, EA’s point about self-ordering systems and their lack of ability to store information is misleading; true they can’t store information about systems other than the forms they define, but they nevertheless do store information of a special kind.  What I think EA really means is that self-ordering systems can’t store arbitrary information.

The type of “think” that EA displays here is reminiscent of an argument I once saw on UD (although I’ve lost the exact chapter and verse). It went along these lines: self-organization requires “necessity”. Necessity implies a probability of 1, which in turn implies an information of zero. Therefore self-organization can’t store information. This argument is false and appears to be based on the misleading connotations of the word “necessity”. What these IDists refer to as “necessity” is simply algorithmic constraint. Since the set of all algorithmic constraints is very large, the selection of a particular suite of constraining algorithms is highly contingent and is hardly a “necessity”. Conversely, a book of random numbers is, to the observer who first comes to it, very "contingent" and thus stores lots of "information". However, once the observer has used the book and committed it to memory, its information value is lost. "Information" is observer dependent. In fact depending on the state of the observer's knowledge so-called "necessity" can be packed with information whereas so-called "contingency" may have a zero information content.
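The observer dependence can be put in sketch form (the book length is an arbitrary illustrative figure of mine): the same book of random digits carries an enormous surprisal for a first-time observer and none at all for an observer who has memorized it.

```
import math

# Sketch of the point that "information" is relative to the observer's state of
# knowledge. The book length below is an arbitrary illustrative figure.
DIGITS_IN_BOOK = 100_000      # assumed number of random digits in the book

# Observer 1 has never seen the book and treats every digit string as equally
# likely, so the surprisal -log2(p) of this particular book is huge.
bits_first_time = DIGITS_IN_BOOK * math.log2(10)

# Observer 2 has memorized the book; its contents are certain, so -log2(1) = 0.
bits_after_memorizing = 0.0

print(f"first-time observer : about {bits_first_time:,.0f} bits")
print(f"after memorization  : {bits_after_memorizing:.0f} bits")
```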

EA, in thinking that he has chased self-organization out of town, invokes the habit of mind which automatically separates out self-organization and design as two very distinct processes. He consequently concludes that design is the only game left in town. EA expresses no cognizance of the fact that, using William Dembski’s principles, he has also chased away what could itself classify as a form of design: for high improbability is also likely to be found in the selection of the rare algorithmic cases needed to make self-organization work.

Kairosfocus finishes with this:

So — noting that self-ordering is a species of mechanical necessity and thus leads to low contingency — we see the significance of the trichotomy necessity, chance, design, and where it points in light of the evidence in hand regarding FSCO/I in DNA etc. END

My Comment: This statement identifies mechanical necessity with low contingency; I think that’s intended to suggest that mechanical necessity cannot be the information bearer required for life; a conclusion that as far as I’m concerned may or may not be true. 


***
Let me stress that I have no vested interest in evolution as a theory and will continue to follow the views of the de-facto IDists with great interest. But I certainly would not argue against evolution along the above lines.