Showing posts with label Reducible Complexity.

Saturday, September 28, 2019

Evolution: Naked Chance?

A set of basic construction parts; throw in a few gluons (i.e. nuts and bolts) and then agitate randomly. Will this mix generate self-replicating, self-perpetuating configurations? There is a way of doing it but is this The Way it happened in our cosmos? (Probably not!)

This post is a response to biochemist Larry Moran's essay entitled "Evolution by Accident".  In his essay he leans toward the view that randomness and serendipity play a very big role in evolution. He sets himself against the natural selectionists, whom he characterises as supporting a "non-random" view of evolution. In his conclusion he writes:

I've tried to summarize all of the random and accidental things that can happen during evolution. Mutations are chance events. Random genetic drift is, of course, random. Accidents and contingency abound in the history of life. All this means that the tape of life will never replay the same way. Chance events affect speciation. All these things seem obvious. So, what's the problem?
The "problem" is that writers like Richard Dawkins have made such a big deal about the non-randomness of natural selection that they risk throwing out the baby with the bathwater. A superficial reading of any Dawkins' book would lead you to the conclusion that evolution is an algorithmic process and that chance and accident have been banished. That's not exactly what he says but it sure is the dominant impression you take away from his work.

Here's another example of the apotheosis of chance in Moran's writings:

What about Monod's argument that evolution is pure chance because mutations are random? Doesn't this mean that the end result of evolution is largely due to those mutations that just happened to occur?

There is no need for me to take sides in this debate between evolution by natural selection and evolution by "pure chance". In fact for the sake of my argument I could proceed under any mix of natural selection and so-called "pure chance".  What I want to show here is that yes, current notions of what drives evolution entail a big random factor, but it is only one aspect of evolution and there are other aspects which are even more significant.

My point will be this: Evolution is driven by chance, but it certainly isn't naked chance; in fact overall the process is very, very far from naked chance. But reading Moran's essay one could be forgiven for thinking he pushes naked chance too far, with no awareness of how ordered, a priori, the world must really be for evolution to work. The second quote above from Moran would be less misleading about the process of evolution if "pure chance" were replaced by "chance". Evolution cannot be pure chance, as we shall see. Moran is either oblivious to this fact or sees it as not worthy of note; but I see it as highly significant, in fact evolution's most significant feature. The fact of the matter is that the chances in the process of evolution must operate within a highly constrained envelope if evolution is to work at all.

This apparent obliviousness of Moran to the organising envelopes which must constrain the chance diffusion of evolution may trace back to Moran's very partisan form of atheism and the related sentiments I identify in my post on the many worlds cosmology: The purposeless backdrop of the atheist world-view has grave difficulties in making sense of any built-in cosmic bias or preference toward certain states of affairs; in particular an ordered status quo. This bias leads to tricky questions like why this and not that?  These questions in turn may raise what to some is the demon of "purpose" and from there it is a short walk into at least a conjectured theism. Partisan atheism finds any contingent bias in the cosmos uncomfortable and finds it easier to handle a cosmos where everything is evenly favoured with the "butter" of probability spread uniformly, thus betraying no sense of preferred statuses in the cosmic dynamic (See here).

My arguments follow even if natural history is driven by some mechanism completely different to standard evolutionary mechanisms. This is because my argument is about logic; that is, it is of the kind "If so & so then it follows that... etc", where "so & so" stands in for "standard evolution", which may not actually hold good (and I must confess I have my doubts!). What concerns me here, however, is what follows if we assume standard views of evolution. Therefore the following proof can be advanced without placing intellectual stakes in particular evolutionary mechanisms as they are currently conceived.

***

In my post on The Mathematics of the Spongeam I used the following equation as a way of talking about evolution:

$$\frac{\partial Y}{\partial t} = D\,\nabla^2 Y + V\,Y$$

This is basically a diffusion equation with an added term, where Y represents a population value subject to diffusion in a multidimensional space. The first term on the right hand side is the ordinary diffusion term expressed in n dimensions and this is a way of representing a random walk across configuration space. The second term introduces the net result of multiplication and death at a particular point in configuration space. As I pointed out in my previous post on this equation, although it encapsulates many complexities it is a huge simplification of reality; for example the diffusion constants in front of the ∇² symbol could vary with dimension and coordinate. Moreover, as I also pointed out in my previous post on this equation, as it stands it doesn't explicitly acknowledge an important potential non-linearity. Viz: if V depends on the environment then it is clear that the value of Y is part of that environment. Therefore V is not just a function of the coordinate system but also of Y. However, although these simplifications reduce the whole equation almost to a cartoon, I can nevertheless use it to express my problems with Moran's brand of thinking.

The diffusion term in the above equation expresses an important feature of evolution; namely, that it proceeds in small random steps. Given this picture one thing becomes very clear: it is fairly intuitively obvious that a pure random walk would never produce what we are looking for in terms of the highly organised complexities needed for self-perpetuating, self-replicating structures. If, by some very remote chance, organised complexity were arrived at via random walk, that same walk would ensure that it very quickly dissolved back into the sea of general randomness. For organised structures to have a chance of persisting for any length of time we need the second term on the right hand side of the equation, that is, VY. Where configurations show net replication V will have a positive value; where populations show net decay V will have a negative value. However, since we expect the number of configurations with sufficient organised complexity to return a positive value of V to be very small compared to the whole of configuration space, this doesn't give us very many viable self-replicating configurations to spread across that space. Hence, relatively speaking, configuration space will be almost empty of self-replicating configurations; we know this simply because a successful replicator will clearly have to be sufficiently organised, and organised structures constitute a very tiny percentage of the whole of configuration space (see my book on Disorder and Randomness).
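
To make the respective roles of the two terms concrete, here is a minimal numerical sketch of the equation (my own toy illustration; the 1-D grid and all parameter values are arbitrary assumptions, not anything taken from Moran or from my spongeam post):

import numpy as np

# Toy 1-D stand-in for configuration space: dY/dt = D*d2Y/dx2 + V(x)*Y,
# solved with explicit finite differences (dx = 1). V is negative (net
# decay) everywhere except a thin "replicator channel" where it is positive.
n, dt, D = 200, 0.1, 1.0
V = np.full(n, -0.1)             # net population decay almost everywhere
V[90:110] = 0.05                 # a thin channel of net replication

Y = np.zeros(n)
Y[100] = 1.0                     # seed a population at one point

for _ in range(2000):
    lap = np.roll(Y, 1) - 2*Y + np.roll(Y, -1)   # discrete Laplacian
    Y = Y + dt * (D * lap + V * Y)

# Diffusion alone would flatten and (with decay) extinguish the seed;
# the VY term lets the population persist, but only inside the channel.
print("peak inside channel :", Y[90:110].max())
print("peak outside channel:", Y[:80].max())

Set V to zero everywhere and the seed simply disperses: the moral of the equation is that diffusion on its own builds nothing that lasts.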

Now here's the rub: since evolution must start from square one (i.e. no replicators), the set of self-perpetuating, self-replicating configurations must be fully connected all the way from the most elementary replicators to sophisticated organisms. This requires V to be such that it forms channels in configuration space favourable to replication, along which evolutionary diffusion can proceed. To assist the imagination in visualising this connected multidimensional set I use my picture of the spongeam. Viz:

(Picture: the spongeam - a tenuous sponge-like network of fibrils spanning a largely empty space)

The above is a three dimensional visualisation of what is in fact a multidimensional object; it is a system of very tenuous fibrils spanning largely empty space. The spongeam may or may not actually exist, but we at least know this: it is a necessary pre-condition of evolution, at least evolution as it is currently understood.

This picture of the spongeam actually prompts a pertinent question: If replicators, which necessarily have to be highly organised, are so utterly dwarfed in number by the immense size of configuration space, are there actually enough of them to populate configuration space with a class of points sufficiently connected to allow evolution by diffusion from square-one to advanced and complex organisms? I have my doubts, but for the sake of the argument we will proceed as if there is a spongeam sufficiently connected to allow diffusional migration from simple structures to complex self-replicating organisms.

Whether a spongeam exists to facilitate evolution depends very much on the form of V.  It is possible to "cheat" and simply patch pathways and channels into configuration space ad hoc style to ensure that a wide range of replicators can be reached by diffusion. But such special pleading doesn't seem to be how our cosmos works; its fundamental parts are more like a construction kit such as meccano, where a few fairly elementary parts and their "fixing" rules are specified and we are then left with the problem of whether such a system, given the diffusion dynamic, can build or "compute" general replicators. Let me say again: I have my doubts, but that's really another story**. Suffice it to show here that, given current understandings of evolution, Larry Moran's characterisation of it as a process of "pure chance" is entirely misleading.

In the spongeam picture the difference between natural selection and Moran's emphasis on the randomness of the evolutionary walk is a fairly minor one: in Moran the neutral diffusive evolutionary "channels" of the spongeam are akin to "level" pathways where there is no bias pushing in a particular direction. Natural selection, on the other hand, corresponds to the case where the pathways have a kind of slope by virtue of a gradient in V, which means that the random walk is a walk with a bias in a particular direction and therefore proceeds more rapidly in that direction. But as far as my equation is concerned this is just a variation on a theme and evolution may proceed under both circumstances. (The irony is that if evolution has occurred then I'm very favourable to Moran's concept of evolution! It minimises the idea of evolution as a fight for survival and suggests instead something more like the drifting apart of languages when communities are separated.)

But whatever! My main point is that all this exposes Larry Moran's misleading view on evolution; for whether evolution is neutral or biased, both scenarios require a spongeam envelope that introduces a considerable information constraint; that is, evolution is a process which in this sense is far from maximum randomness and presupposes a highly organised constraint spanning configuration space.

I'm not here arguing whether or not the spongeam actually exists in the real world. Rather, I want to make the point that if it does exist - as it must if evolution has occurred as currently conceived - it does no justice whatever to describe the process of evolution as if it is pure randomness at work: pure randomness would lead to nothing. If the spongeam does exist in our cosmos then presumably its convoluted network of fuzzy pathways is determined by the set of fundamental particle interactions.

That evolution must start with a huge information bias has been proved generally by William Dembski. Dembski has been unjustifiably abused for his efforts; a sign of the worldview stakes involved in the debate. Although I don't accept the inference that some IDists have drawn from Dembski that information can't be created, Dembski's result is sound. I present a back-of-the-envelope proof of this theorem in this paper.


Relevant Link:
http://quantumnonlinearity.blogspot.com/2018/01/evolution-its-not-just-chance-says-pz.html

Footnote
** The de facto "Intelligent Design" community are quite dogmatic on this point: For them there is little or no doubt that given the cosmic physical regime evolution, as currently understood, is impossible and that "Intelligence" of some sort (they don't elucidate what sort) needs to step in somehow (they don't elucidate how) and do its stuff of filling in the creative gaps in the creation story. The irony is that what raises a question mark over their thesis is the very existence of the super-intelligence they posit. For if that intelligence is none other than an omniscient omnipotent God then who knows what such a being is capable of: For all we know there may be a set of particle interactions which imply a spongeam and that God may have chosen that set! (although as I must repeat, I am myself doubtful about the existence of the spongeam - but I might be wrong!)

Tuesday, April 17, 2018

North American IDists Screw Up Irreducible Complexity Definition



This post has exactly the same title as a previous post where I criticized de facto Intelligent Design's concept of "irreducible complexity". The original post was here:


Because the ID community have taken on board such a poor definition of irreducible complexity it is no surprise that a serene but sarcastic-sounding PZ Myers has a field day:



The de facto ID community don't appear to understand how hard it is to determine if irreducible complexity, when properly defined, is a feature of the configuration space of our physical regime. This is how I defined it in this post:

Irreducible/reducible complexity: I don’t use these terms in the sense of Michael Behe’s flawed concept of irreducible complexity. Irreducible complexity and reducible complexity as I conceive them are to do with how stable organic structures are laid out in configuration space. If a set of structures are reducibly complex they form a connected set in configuration space: This means that the diffusional computational process of evolution can bring about considerable change in organic structure. Irreducible complexity, on the other hand, is the opposite. That is, when such structures are widely separated in configuration space it is not possible for evolutionary diffusion to hop from one organism to another. Irreducible complexity, if defined properly (that is, not in the Behe sense), is an evolution stopper.

I'm going to do a quick recap of the line of argumentation that I have presented more fully in the following blog posts:

http://quantumnonlinearity.blogspot.co.uk/2014/08/melencolia-i-part-3-sharpening-focus.html
http://quantumnonlinearity.blogspot.co.uk/2015/06/algorithms-searches-dualism-and_13.html
http://quantumnonlinearity.blogspot.co.uk/2015/11/intelligent-designs-2001-space-odyssey.html
http://quantumnonlinearity.blogspot.co.uk/2017/07/melencolia-i-project-articles.html

If we take a large collection of "hard" particles like, say, marbles, and we imagine them to be agitated so that the particles display a random walk dynamic, clearly no organised structures will persist long enough for there to be any hope of self-perpetuating, self-replicating structures forming. For if by chance order should come about it would immediately start to evaporate. But in our real world the component parts aren't just hard particles - the particles interact by attraction as well as repulsion; these interactions are such that particles stick together. This is a bit like taking a set of meccano parts and then adding "nuts & bolts"; these extra "particles" we call nuts and bolts imply that certain configurations will hold together once formed. Adding particles that act as a kind of "glue" is a way of constraining or limiting the number of possibilities in favour of order. But for standard evolution to work these constraints of "interaction" must:

1. Sufficiently reduce the size of configuration space.
2. Concomitantly define a sufficiently large set (relative to the size of configuration space) of self-perpetuating, self-replicating structures,
3. ...so that the class of self-perpetuating, self-replicating structures forms a connected set in configuration space (the "spongeam") allowing a diffusion dynamic to diffuse through this class (a toy sketch of this connectivity requirement follows below).
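
Here is a minimal sketch of what requirement 3 means computationally. Everything in it is an assumed stand-in - configurations as bit strings, single-bit flips as mutational steps, and a deliberately artificial stability rule - not a claim about real chemistry:

from itertools import product
from collections import deque

n = 10                                   # bits per toy "configuration"

def stable(cfg):
    # Assumption: a configuration is "self-perpetuating" if at least
    # half its bits are set. A pure toy rule, chosen only so that the
    # connectivity question below has a definite answer.
    return sum(cfg) >= n // 2

stable_set = {cfg for cfg in product((0, 1), repeat=n) if stable(cfg)}

def neighbours(cfg):
    # single-bit mutations
    for i in range(n):
        yield cfg[:i] + (1 - cfg[i],) + cfg[i+1:]

def connected(nodes):
    # breadth-first search: can diffusion reach every stable config
    # from any other while staying inside the stable set?
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nb in neighbours(queue.popleft()):
            if nb in nodes and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(nodes)

print(len(stable_set), "stable configs out of", 2**n, "-",
      "connected" if connected(stable_set) else "fragmented")

With this particular toy rule the stable set happens to be connected; the open question for our physical regime is whether the real analogue of stable() yields a connected spongeam or a scatter of isolated islands.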

My own intuitions are that our physical regime is not constraining enough and therefore the spongeam is too attenuated a structure for it to return a realistic probability for the evolution of life: hence my speculative Melencolia I project. But in the absence of strong theoretical treatment I suppose we are rather thrown back on the empirical evidence that the PZ Myers of this world are telling us about. We also need to remember that they are certainly not claiming that "evolution is just chance": see here for example:

http://quantumnonlinearity.blogspot.co.uk/2018/01/evolution-its-not-just-chance-says-pz.html

It is because standard evolution must tap into an a priori source of information, a source even acknowledged by atheists, that Christian scientists like Ken Miller, John Polkinghorne and Denis Alexander cannot be accused of claiming that "blind natural forces" (sic) created life. Ironically what scuppers the de facto IDists' argument that "blind natural forces" (sic) can't create life is an internal inconsistency in their argument; namely, that "natural forces" created, managed and sustained by an omniscient intelligence are going to be neither "natural" nor "blind"!

Saturday, June 13, 2015

Algorithms, Searches, Dualism and Declarative Computation. Part 4

If conventional evolutionary theory and OOL (origin of life) are to work then configuration space must look something like this, with thin strands of survivability/stability/self-perpetuation permeating configuration space: Forget fitness surfaces!

This is the last in the series where I am looking at a post by Joe Felsenstein and Tom English (FE). In their post FE critique a paper by Intelligent Design gurus Dembski, Ewert and Marks (DEM). The other parts of this series can be seen here, here and here.

In their critique of DEM’s paper FE construct a simple toy model of evolution. This model they call the “Greedy Uphill Climbing Bug” (GUCB) – basically it is an algorithm which always moves in the direction of greatest “fitness” – that is in the direction of greatest slope of the “fitness surface” or “fitness field”, a field which is conjectured to pervade configuration space. The point they make from this model is simple and valid: It shows that a smooth fitness field (a smoothness which is a likely consequence of the laws of physics as Joe Felsenstein points out), when combined with a GUCB algorithm, entails a lot of “active information”. In fact FE find that even a white noise fitness field also implies a lot of “active information”.  “Active Information” is a term used by DEM to refer to the inherent up front informational bias of a search, a bias which means that the search does better than random chance.

Admittedly FE’s model is not very realistic; but as FE state it needn’t be realistic to make their point: It simply shows that fairly basic mathematical conditions can supply a considerable level of active information. However, in spite of FE’s work I have to confess that I have doubts about the efficacy of the current model of evolution as a generator of life. To see why let’s take a closer look at the GUCB algorithm.

FE put their GUCB algorithm in a configuration space implied by a genome of 3000 bases. At any given starting position the GUCB senses the direction of greatest slope (i.e. greatest fitness) and then moves in that direction. Now, how is this apparently simple algorithm realised in terms of real biology? Well, firstly in order for the GUCB to determine which direction to move it must first put out feelers in no less than 3000 directions. In real biology it is claimed that this is not done by systematically working through the 3000 cases, but by the algorithmically inefficient trial and error steps of random mutation. Once a mutation enters a population it must then be tested by the complex selection processes which determine structural viability, environmental fitness and competition. If it survives this test – which may take several generations of progeny –  then the organism can move on; if it fails, the process must start all over again with a new randomly selected mutation. So, it is clear that successfully moving in any direction at all  in configuration space entails, in real biology, a very large number of computational operations and trials.
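
For concreteness, here is a minimal sketch of a GUCB-style climber based on my summary of FE's description above; the genome length and the deliberately smooth fitness function are my own arbitrary stand-ins, not FE's actual choices:

import random

L = 100                                  # toy genome length (FE use 3000 bases)

def fitness(g):
    # a deliberately smooth fitness: each 1-bit adds one unit
    return sum(g)

random.seed(0)
g = [random.randint(0, 1) for _ in range(L)]
steps = 0
while True:
    # put out a "feeler" in each of the L directions: score every 1-bit mutant
    best = max(range(L), key=lambda i: fitness(g[:i] + [1 - g[i]] + g[i+1:]))
    mutant = g[:best] + [1 - g[best]] + g[best+1:]
    if fitness(mutant) <= fitness(g):
        break                            # local optimum reached
    g, steps = mutant, steps + 1

print(f"reached fitness {fitness(g)}/{L} in {steps} greedy steps")

Note that each pass of the loop quietly evaluates L mutants; that single line of code stands in for exactly the expensive generations-long trial-and-error described in the paragraph above.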

Now, I’m not going to contradict the contention that perhaps given large tracts of geological time real biology is capable of moving forward in this laborious random-walk trial and error process: I’m not a mathematical biologist so I won’t make any claims on that score. But one thing is very clear to me, and it should be clear to anyone else: Viz, because the stepping process consumes so much time the structure that is doing the stepping must be stable and endowed with sufficient persistence and self-perpetuation in the first place. That is, although the structure may not be of the “fittest” quality, it must nevertheless be fit enough to make the next step in configuration space. If the structure wasn’t at least this fit it wouldn’t survive long enough to move anywhere in configuration space.  So, implicit in the GUCB model is the assumption that the points in configuration space are all fit – that is, capable of survival long enough to step through configuration space.

It is the property of stability/survivability rather than fitness that raises the big 64-million-dollar question. Namely, is configuration space populated with enough stable self-perpetuating structures to provide the pathways on which the GUCB can move?  For the GUCB to move around in configuration space the class of stable structures must be a continuously connected set.  That is, for standard evolution to work the self-perpetuating structures in configuration space must form a reducibly complex set*; that is, working, stable functionality cannot be isolated into concentrated islands. If the set of self-perpetuating structures does exist then I envisage it to look something like a kind of thin spongy structure that stretches across configuration space (see picture at the head of this post).

It was the question of whether this spongy structure actually exists in configuration space which I posed in my configuration space series, a series whose last post can be picked up here. In the second part of this series I wrote:

Axiom 2 tells us that the set of living structures is tiny compared to the set of all possible non-self-perpetuating structures. This fact is an outcome of axiom 1 and the nature of disorder: If living structures occupy the mid regions between high order and high disorder then the logarithmic nature of the vertical axis on the LogZ-S graph will imply that disordered configurations are overwhelmingly more numerous. This raises the question of whether there are simply too few self-perpetuating structures to populate configuration space even with a very thin spongy structure; in fact the spongy structure may be so thin that although mathematically speaking we will have an in-principle reducible complexity, in terms of practical probabilities the structure is so tenuous that it may as well not exist!

Of course I have no analytical demonstration addressing this question and I doubt anyone else has: How can we count the class of stable structures? For a start it very much depends on the complex environment those structures are in; in fact those structures are themselves part of that environment, and so we have a convoluted non-linear feedback relationship linking structure to environment. I suspect, therefore, that we are dealing with something here that is computationally irreducible.

But be all that as it may, my intuitions are that my question is answered in the negative: That is, the number of stable self-perpetuating structures is far too small to populate configuration space sufficiently to connect itself into a spongy structure that sprawls extensively across that space. If this conjecture is correct then it would make conventional evolution and OOL impossible. It was this feeling that resulted in my terminating the configuration space series and returning to early ideas that I started expressing publicly in my Melencolia I series, a series I continue to pursue.


Appendix
In this appendix I’ll assume that the class of stable self-perpetuating structures is grouped into a sponge-like structure that stretches extensively across configuration space. If this is the case then we can see that evolution/OOL is less about differentials in fitness than it is about survivability and stability. In the sponge model of configuration space evolution and OOL are envisaged as traversing the sponge with a kind of random walk based diffusional motion. However, it is possible that in some places the random walk is directionally biased, implying that certain parts of the sponge represent “fitter” structures than other parts of the sponge. In effect, then, a “fitness field” would pervade the structure of the sponge; this would mean that some areas of the sponge act like potential wells attracting the diffusion motion. However, it is equally possible that in other places there is no bias and the evolutionary/OOL diffusion is neutral; this is not to say that evolution is then without direction; as I have made clear before, the sponge structure permeating configuration space acts as tramlines directing evolutionary diffusion. This latter point seems to be something that atheists don’t always understand. See the comments section of this post where I took up this question with atheist Chris Nedin. He appears to have absolutely no inkling of just how profoundly directional his concept of evolution is required to be; frankly, he seems to be pulling the wool over his own eyes. See also: http://quantumnonlinearity.blogspot.co.uk/2015/01/the-road-to-somewhere.html
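
The tramline point in this appendix can be made concrete with a small sketch (channel geometry, step counts and the bias value are all arbitrary assumptions of mine). An unbiased walker confined to a narrow channel still ranges far along the channel because the walls forbid sideways escape; a small bias merely turns that wandering into steady drift along the same tramline:

import random

def walk(steps, bias=0.0, half_width=2):
    x = y = 0
    for _ in range(steps):
        if random.random() < 0.5:
            x += 1 if random.random() < 0.5 + bias else -1   # along the channel
        else:
            dy = 1 if random.random() < 0.5 else -1          # across the channel
            if abs(y + dy) <= half_width:                    # wall: step blocked
                y += dy
    return x, y

# With no bias, |x| still grows like sqrt(steps) while y stays pinned
# between the walls; with a small bias, x drifts steadily.
random.seed(1)
print("neutral walk, final (x, y):", walk(100_000))
print("biased walk,  final (x, y):", walk(100_000, bias=0.01))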


Footnote: Why I repudiate the de-facto IDists
* It follows then that an irreducibly complex set of structures is a highly disconnected set. This definition of irreducible complexity is different from the definition used by the de-facto Intelligent Design community, who once again seem to have screwed up: See the link below about why their definition doesn’t work:
Let me get the following complaint off my chest: When I first came to this debate I had high hopes of the de-facto ID community. After all, they were in the main reasonable and moderate evangelicals like William Dembski. Moreover, I’ve always agreed with their premise that the information burden of our cosmos isn’t trivial and that this presents an enigma. However, I have become increasingly disillusioned with them: They have screwed up on various technical issues like irreducible complexity and the 2nd law of thermodynamics. Their so-called explanatory filter encourages dualistic God-of-the-Gaps theology. They are too uncritical of fundamentalism and in some cases harbour unreasonable religious fanatics in their midst. They also have a default well-right-of-centre political stance which I can’t support. They are right-wing and fundamentalist sympathisers, and this seems to be bound up with their disdain of government-funded academics, a group that the fundamentalists, for obvious reasons, also hate.

Thursday, July 17, 2014

North American IDists Screw Up Irreducible Complexity Definition


The North American ID movement’s concept of irreducible complexity is badly formed; it defines irreducible complexity as the necessary juxtaposition of two or more parts in order for a function to work. But quoting a footnote to one of my blog posts:

Irreducible/reducible complexity: I don’t use these terms in the sense of Michael Behe’s flawed concept of irreducible complexity. Irreducible complexity and reducible complexity as I conceive them are to do with how stable organic structures are laid out in configuration space. If a set of structures are reducibly complex they form a connected set in configuration space: This means that the diffusional computational process of evolution can bring about considerable change in organic structure. Irreducible complexity, on the other hand, is the opposite. That is, when such structures are widely separated in configuration space it is not possible for evolutionary diffusion to hop from one organism to another. Irreducible complexity, if defined properly (that is, not in the Behe sense), is an evolution stopper.

...the implication being that the Behe definition, and that promoted by an ID site like "Uncommon Descent", is not an evolution stopper.
Further details on my view of irreducible complexity can be seen here:
The weakness of the North American concept of irreducible complexity becomes all too apparent in one of PZ Myers' posts where he criticizes IDist Casey Luskin’s use of the concept. See here:

Saturday, April 26, 2014

Western Dualism in the North American Intelligent Design Community. Part 3


Professor Larry Moran, biochemist and evangelical atheist, refers to Intelligent Design Creationists as “IDiots”.  Given that I believe the story of cosmic configuration changes to be an act of intelligence, moreover a process of intelligence, that probably makes me an IDiot too, a title I willingly embrace, and with good humour I hope. However, in this series of posts I'm anxious to distinguish my views from the largely right-wing North American “IDiots” with whom I have a hard time seeing eye to eye. In fact I’m trying to show that “IDiots” like V J Torley, who has a post on Uncommon Descent I am critiquing, promote a very distinctly dualist God-of-the-Gaps type of IDiocy, an IDiocy that in the final analysis amounts to an attack on science: For Torley’s views very much depend on maintaining gaps in science’s description of nature so that these can then be plugged with inscrutable acts of God intelligence; that is, like other IDiots of his persuasion Torley is committed to a belief that life cannot be explained without invoking the activities of a tinkering, tampering black-box intelligence, an intelligence that makes good the assumed providential inadequacies of the physical regime. Such views harmonise well with a nature vs. God theological dualism. So, although I myself classify as an IDiot, my plea is that Torley is a much bigger IDiot than I am.

If Torley believed that the physical regime had generated life then his theology of the tinkering occasional God would fall over. So in order to scorn the idea that the cosmic algorithm suite could be providentially fruitful enough to generate life Torley resorts to this naive caricature:

The idea of writing a mathematical program that can generate a rich variety of meaningful stories from a “word bank” is comically absurd. Even a master programmer could not do that, unless he/she “cheated” and pre-specified the stories into the program itself. But that wouldn’t save any effort, would it? And one cannot even imagine a simple procedure for writing a good story. Stories are inherently complex, and their parts have to hang together in just the right way, or else they will not “flow” properly.
Someone might suggest that you could generate a very large number of stories by writing one master story and allowing parts of it to vary, like this:
“It was a bright cold day in April, and the clocks were striking thirteen.” (The opening line to George Orwell’s 1984.)
“It was a _____ , _____ day in _____ , and the clocks were striking _____ .”
To be sure, you could generate a very large number of stories that way, but reading them all would be a very monotonous enterprise: there wouldn't be any real variety. If the stories we generated could only vary within narrow constraints, like the gap-fill sentence above, then they would be very shallow and boring, and would feel “canned.”

My Comment: Comically absurd? Too right; the above certainly classifies as a piece of comedy: But as a serious proposition it fails on more than one level. Firstly it fails on the general level that the evolution of the universe (as opposed to the theory of evolution) is not a means of generating a story, but is a story in its own right. At this general level Torley takes no cognizance of the recursive nature of Intelligence: Intelligence can tell a story, but the compiling and creation of that story is also a story, a story which a higher level intelligence could have conceived. The point is that Torley’s dualism, which induces him to impose a natural forces vs. intelligence dichotomy on the whole debate, simply cannot be enforced as a fundamental distinction. The recursive nature of intelligence means that intelligent activity can be contained within a higher level intelligence. The end result of intelligent activity may be a story, but the intellectual process that generates that story is also a story. The subject of computation, which is closely related to the endeavours of intelligence, has a similarly recursive nature: programs can be written to write programs just as intelligence can conceive intelligence; the North American ID paradigm which enforces a sharp distinction between natural processes and intelligence is fundamentally flawed in failing to do justice to the reflexive character of sentience.
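
The "programs can be written to write programs" point is trivially demonstrable; here is a minimal toy, offered as nothing more than an existence proof of the recursion I am appealing to:

# A program that writes another program and then runs it. The generated
# child is itself a complete program; the generator's activity and the
# child's activity are both "stories" in the sense used above.
child_source = 'print("I am a program written by another program")'

with open("child.py", "w") as f:     # the generator writes the child
    f.write(child_source)

exec(compile(child_source, "child.py", "exec"))   # ...and runs it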

And here’s the second layer of failure in Torley’s ID: Notice that in his absurd example he uses assumed anthropomorphic values to explain why his scenario is unlikely; it’s comically absurd, a master programmer could not do that, unless he/she “cheated”, that wouldn't save any effort, would it? But in spite of these anthropomorphic allusions Torley has missed one very important anthropomorphic rationale for writing software – namely that of searching and finding. The scenario that Torley has sketched out tenders the idea of a programmer who has more or less conceived the solution from the outset and then tries to “solve it”. Of course, pre-specifying the end result destroys the whole point of a search! This preposterous notion presumably suits Torley down to the ground as he is anxious to maintain intelligence as an entirely distinct entity from the algorithmic processes that it could conceivably subsume.  In fact I would go as far as to say that “seeking and finding” is an important aspect of what intelligent activity is all about; searching and finding is intelligence in action. In such searches very general criteria may be laid down which dictate the conditions under which seeking eventually results in a positive find.

Torley’s flippant caricature betrays his point-of-no-return commitment to the notion of acts of intelligence as exclusively eminent and occasional events rather than seeing the cosmic process as immanent intelligence in action: If a programmer could run his search program in his mind then this might be a metaphor for what immanent intelligence means. If intelligent activity is to be recognized as intelligence at all then this very general structure  of seeking and finding will be part of its thought life. This action of searching and finding is the cosmic story that is being told. Mental life is a story of change and development.

It continues to get worse: Torley merely states that emergent life is impossible:

An organism has a story embedded in every cell of its body: its developmental program, which makes it what it is. Since it embodies a story, an organism cannot, even in principle, be produced by a single, simple act. And just as one story cannot be changed step-by-step into another while still remaining a coherent story, so too, it is impossible for one type of living thing to change into another as a result of a gradualistic step-by-step process, while remaining a viable organism.

My Comment: Yes, computationally speaking an organism could not be a single simple act: Clearly locating such structures in configuration space would require a very well-resourced search. But the assertion that an organism can’t be changed into another by gradualistic step-by-step processes can’t be argued either way from any known principled grounds (as opposed to evidential grounds). In contrast Torley is claiming here that he has principled grounds for rejecting gradualistic change as impossible. But this is just sheer assertion. True, it is possible that earthly organisms are actually irreducibly complex*2 to the extent that the step-by-step change envisaged by standard evolutionary theory is blocked; but in spite of his assertiveness Torley doesn't really know this for a fact: As intelligent beings ourselves we simply haven’t got the mental/computational wherewithal to search our way through all the possible ways the organisms of an organism suite can be altered to know whether or not gradualistic change is possible or impossible, let alone search all the possible physical regimes to establish if a reducibly complex*2 set of organisms in configuration space is mathematically impossible. What is motivating Torley is, of course, his anxiousness to maintain his preconceived God-of-the-Gaps theology which predisposes him to the a priori opinion that life is irreducibly complex*2 – entertaining the opposite opinion at the same time is out of the question, such is his intellectual commitment. Like other right-wing “IDiots” he has staked all on evolution not being a workable option. All his eggs are in one basket; the right-wing American ID basket.

I’ll concede that it may well be that the organisms of our physical regime are irreducibly complex*2; but even if this is the case Torley is still missing the fact that if life is a result of an intelligent search, then there will be an inevitable underlying step-by-step gradualistic change as the search algorithms sift through the possibilities in a systematic and incremental way (although this gradualism may not be reified in material terms).

Stories are not like mathematical formulas; and yet, undoubtedly they are still beautiful. They require a lot of work to produce. They are not simple, regular or symmetrical; they have to be specified in considerable detail. Who are we to deny God the privilege of producing life in this way, if He so wishes? The universe is governed by His conception of beauty, not ours, and if it contained nothing but mathematically elegant forms, it would be a boring, sterile place indeed. Crystals are pretty; but life is much richer and more interesting than any crystal. Life cannot be generated with the aid of a few simple rules. It needs to be planned and designed very carefully, in a very “hands-on” fashion. In order to facilitate this, God needs a universe which is ontologically “open” to manipulation by Him whenever He sees fit, rather than a closed, autonomous universe
The beauty found in living things, then, cannot be defined as a balance between plenitude and economy (to use Leibniz’s terms), or (as Hogarth would have put it) between variety and underlying simplicity. It is a different kind of beauty, like that of a story. That is why life needs to be intelligently designed.

My Comment: This really betrays Torley’s limited concept of the role of mathematical formulas – he thinks of them not as search constraints but as recipes that generate preconceived solutions. This is what I refer to as the “dynamic fallacy” or the "front loading fallacy" that I have seen before amongst the North American ID community: For example, they often think it is necessary to counter the idea that the contingent configurations of the DNA coding are chemically preferred by the laws of physics. Of course, no bias toward the DNA coding has been found in physics and chemistry, and so the North American ID community sees this as evidence for their God-of-the-Gaps theology. But what they neglect to consider is that the physical regime is part of an algorithm suite which has the effect of focusing the contingent possibilities into a narrow band in order to raise the probability of locating living configurations. This largely right-wing ID community haven’t rumbled the distinction between a physical regime as a constraint defining a search space in a declarative programming paradigm and the more familiar concept of a physical regime conceived as a procedural programming paradigm that determines the course of action in advance, in strict sequence. (See here for more on this subject; this is, in fact, a topic I am still working on and hope to publish here.)
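
The declarative/procedural distinction can be put in miniature like this (a toy of my own; the particular constraints are arbitrary):

from itertools import product

# Declarative: the "formula" only fences off a search space; the answer
# still has to be found by search.
constraints = [lambda x, y: x + y == 10, lambda x, y: x * y == 21]
solutions = [(x, y) for x, y in product(range(11), repeat=2)
             if all(c(x, y) for c in constraints)]
print("declarative search found:", solutions)     # [(3, 7), (7, 3)]

# Procedural: a recipe that marches straight to a preconceived solution,
# in strict sequence.
x = 3
y = 10 - x
print("procedural recipe gives :", (x, y))

Torley's caricature only addresses the second paradigm; the first is what I have in mind when I speak of a physical regime as a constraint defining a search space.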

It is conceivable, I suppose, that cosmic history could be a kind of decompression operation which generates life in some kind of linear time procedural process; this is an extreme kind of "front loading" where Torley’s objections about the generation of pre-conceived solutions applies. But herein lies the rub: In such a case the authentic heavy duty computational problem has been pre-solved and the result simply “compressed” (or "front loaded") ready for the uncovering of what is effectively already there. Instead I think we need to start thinking in terms of the cosmic process being less a linear time decompression, but rather the actual proactive solution of a problem by searching, a problem whose solution has not been "front loaded" but in fact may still be in the throes of being “solved”. Given the problem is likely to have an exponential intractability it will need to be resourced by some kind of expanding parallelism (such as we see in Quantum Mechanics, perhaps). This is a far cry from the kind of linear time serial computation that is one of the straw men of North American ID.

Footnotes
*1 In spite of my disagreements with Uncommon Descenters I get the feeling that they are in a different league to the far less accommodating class of hardened “hell and damnation” sectarian heretic hunters we find amongst the US religious right (and, to be fair, to some extent in the UK).

* 2 Irreducible/reducible complexity: I don’t use these terms in the sense of Michael Behe’s flawed concept of irreducible complexity. Irreducible complexity and reducible complexity as I conceive them are to do with how stable organic structures are laid out in configuration space. If a set of structures are reducibly complex they form a connected set in configuration space: This means that the diffusional computational process of evolution can bring about considerable change in organic structure. Irreducible complexity, on the other hand, is the opposite. That is, when such structures are widely separated in configuration space it is not possible for evolutionary diffusion to hop from one organism to another. Irreducible complexity, if defined properly (that is, not in the Behe sense), is an evolution stopper.

Tuesday, April 02, 2013

Once More into the False Dichotomy Zone: "Naturalism vs. Design".



(Picture from http://www.faradayschools.com/re-topics/re-year-10-11/god-of-the-gaps/ )

Established evolutionary theory may be a good theory, but I wouldn't say I'm 100% convinced, and I’ll continue to read with interest the views of the de-facto Intelligent Design community. But in spite of that I certainly don’t share what appears to be the de-facto IDists' well-motivated anti-evolutionary complex. On occasion this underlying complex manifests itself in a strenuous drive to find in-principle refutations that short cut the work of debunking evolution. An example of this is Granville Sewell, who is thoroughly beguiled by a belief that the 2nd law of thermodynamics provides an in-principle refutation of evolution.

Before I go any further with a critical look at a particular post on the IDist web site Uncommon Descent let me make it clear that I at least agree with the de-facto IDists on this: One can’t do natural history without assuming as a starting point a world with an initially high burden of information. That is, our world could not be as it is without being resourced by some very rare conditions, whether of actual up and running configurations or the appropriate algorithmic generators of those configurations. If from either of these starting points we use the principle of equal a-priori probabilities to convert rarity of case into high improbability, we infer that our cosmos contains an irreducible burden of information; in a nutshell this is William Dembski’s result. If one is so inclined this inevitable logical hiatus readily takes us into theology, but I won’t touch that subject here. Suffice to say that I agree with Dembski’s core thesis that the cosmos’s burden of information is irreducible. In fact it even applies to multiverse scenarios.
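
As a back-of-the-envelope version of that conversion from rarity to improbability to information (my paraphrase, not Dembski's exact formulation): if only n out of N equally probable configurations qualify, then

$$ I = -\log_2 p = -\log_2 \frac{n}{N} = \log_2 N - \log_2 n $$

and since n is minuscule compared to N on any plausible counting of viable cases, I is huge; that logarithm is the irreducible "burden of information" referred to above.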

The recent Uncommon Descent post I am considering here, however, deals with less fundamental questions and serves to highlight where I would depart from UD thinking. Below I quote this post in its entirety and interleave it with my own commentary.

March 23, 2013           Posted by kairosfocus under Intelligent Design:-        
EA nails it in a response to an insightful remark by KN (and one by Box): “the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium”
Here at UD, comment exchanges can be very enlightening. In this case, in the recent Quote of the Day thread, two of the best commenters at UD — and yes, I count KN as one of the best, never mind that we often differ — have gone at it (and, Box, your own thoughts — e.g. here — were quite good too  ).

My Comment: The reason why Kairosfocus favours the two commenters he is about to quote is because he has very much backed the law and disorder vs. design dichotomy and he is quite sure that law and disorder have not generated life. He’s of the school of thought that once you have eliminated law and disorder from the enquiry that leaves intelligent design as the explanation for living structures. The trouble with this view is that it can give the false impression that law and disorder vs. design are mutually exclusive categories. I have expressed doubts about this dichotomy on this blog several times. However, to be fair to Kairosfocus, I’m sure he understands that if our cosmic law and disorder regime has generated living configurations then there still remains the irrefutable work of Dembski, work that, as Dembski admits, doesn’t in and of itself contradict evolution.

But if Kairosfocus is right and the cosmic law and disorder regime is inadequate to generate life then this means that “design theory” becomes a very accessible and compelling argument; it is easy to picture some kind of homunculus molecular designer piecing together the configurations of life much like a human engineer. The OOL/evolutionary alternative requires one to grasp some rather difficult to understand notions employing information theory, fundamental physics and algorithmics.

Anyway, continuing with the UD post:

Let’s lead with Box:
Box, 49: [KN,] your deep and important question *how do parts become integrated wholes?* need to be answered. And when the parts are excluded from the answer, we are forced to except the reality of a ‘form’ that is not a part and that does account for the integration of the parts. And indeed, if DNA, proteins or any other part of the cell are excluded from the answer, than this phenomenon is non-material.


My Comment: This, I think, is an allusion to one of the de-facto ID community’s better ideas; namely irreducible complexity. In non-mathematical terms irreducible complexity can be expressed as follows: Organic components can only exist if they are part of an organic whole that maintains their existence. But conversely the survival of the organic whole is dependent on the individual components surviving. In other words we have mutual dependence between the parts and the whole. So, since organic wholes depend on parts and parts depend on organic wholes, it appears that this mutual dependence prevents an evolutionary piecemeal assembly of an organism from its parts. The conclusion is that each organic form came into existence as a fait accompli. However, this logic has a loophole that evolutionists can exploit. The kind of incremental changes that can be conceived are not stuck at the discrete level of mutually dependent parts. It hardly needs to be said that organic components are composed of much more elementary components than organic parts, namely fundamental particles. Therefore the question naturally arises as to whether the organic parts themselves can be incrementally morphed at the particulate level and yet still leave us with a viable stable organic whole. This, of course, takes us into the fundamental question of whether configuration space with respect to these incremental changes is reducibly complex, a concept defined in the post here. But as I mention in that latter post there is an issue with reducible complexity: Given that the number of viable organisms is likely to be an all but negligible fraction of the total possible configurations of atomic parts, it is certainly not obvious to me that a practical reducible complexity is a feature of our physical regime. But conversely I can’t prove that it isn’t a feature!

The point I am making here is that because the UD comments above remain at the discrete “part” level rather than the more fundamental particulate level they don’t scratch the surface of the deep theoretical vistas opened up by the reducible complexity question. But there is, I’ll concede, a prima facie case for the de-facto ID community’s skepticism of evolution, a case that particularly revolves round the idea of irreducible complexity; although this skepticism appears to be motivated by a narrowness of perspective, namely, the perspective that “Design” and “Naturalism” so called (i.e. OOL and evolution) are at odds with one another.

Now, it may well be that evolutionary theory as the scientific establishment conceives it is wrong, perhaps because irreducible complexity blocks the incremental changes evolutionary theory demands. But one feels that if evidence came to light that unequivocally contradicted the de-facto ID community’s anti-evolutionism (if such is possible) it would mean a very drastic revision of their “design vs. nature” paradigm. The kind of argument above regarding the apparently all-or-nothing existence of organic structures, although in some ways compelling, is certainly not absolutely obliging. The UD argument I have quoted regarding the holistic nature of organisms does not classify as a killer “in principle” argument against evolution. The de-facto ID community is very enamoured of the metaphor of the intelligent homunculus who works like a human engineer in contradistinction to the so-called “naturalistic” evolutionary mechanisms. But there is a great irony here: If physical regimes implying reducible complexity have a mathematical existence then the computational resources needed to find and implement such a regime could be put down to an intelligent agent. Ironically then, using the very principles the de-facto ID community espouse, a workable evolution can hardly be classified as “natural” but rather very “unnatural” and moreover evidence of a designer! If the de-facto IDists are prepared to espouse an all but divine designer, such a designer could be the very means of solving the problem of selecting a physical regime where OOL and evolution work!

KN, 52:  the right question to ask, in my estimation, is, “are there self-organizing processes in nature?” For if there aren’t, or if there are, but they can’t account for life, then design theory looks like the only game in town. But, if there are self-organizing processes that could (probably) account for life, then there’s a genuine tertium quid between the Epicurean conjunct of chance and necessity and the Platonic insistence on design-from-above.

My Comment: Self-organization, so-called, is not of necessity a tertium quid; it could yet be the outcome of a carefully selected Law and Disorder dynamic. In fact if evolution and the necessary OOL processes that must go with it are sufficient to generate at least an elementary form of life this would classify as “self-organization”. Richard Johns, who is an IDist, would agree on this point. In a published paper Johns probes the subject of self-organization using a cellular automata model. Cellular automata are based on a law and disorder paradigm and make use of no tertium quid. Of course, as a de-facto IDist Johns is somewhat committed to the notion that this form of self-organization cannot generate life, but his paper does not succeed in proving the case either way. In fact in order to support his prior commitment to the inadequacy of self-organization he hamstrings law and disorder as a means of self-organization with a habitual mode of thinking that has become fixed in people’s minds ever since Richard Dawkins coined the phrase “The Blind Watchmaker”. In Johns’ case he applies the general idea behind the Blind Watchmaker by taking it for granted that the law and disorder algorithms controlling his cellular system are selected blindly. Since it is a likely conjecture that life-generating law and disorder systems are extremely rare cases amongst the class of all possible algorithmic systems (if indeed they have mathematical existence at all) then clearly blind selection of the cellular algorithms is unlikely to give us a system that generates living configurations! But if Johns believes in an omni-intelligent agent of open-ended powers then that agent could just as well express itself through the selection of just the right life-generating regime (assuming it has a mathematical existence) as contrive living configurations directly. Given the ID culture Johns has identified with, he is likely to think of self-organization as a “naturalistic” method of generating life and so he hamstrings this notion by simply not allowing it to be set up via intelligent agency. Of course, if you disallow intelligence to express itself in this way and insist on the selection of physical regimes on a blind random basis then you are not likely to end up with a life generator!
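
To make the "law and disorder" paradigm concrete, here is a minimal one-dimensional cellular automaton (a generic toy of my own, not Johns' actual model). The thing to notice is that the rule table is one contingent selection out of 2^8 = 256 elementary rules - which is exactly where the selection problem bites:

import random

RULE = 110                          # a chosen rule, not a "necessary" one

def step(row):
    # law: a fixed local rule applied everywhere (wrap-around edges)
    out = []
    for i in range(len(row)):
        left, centre, right = row[i-1], row[i], row[(i+1) % len(row)]
        idx = (left << 2) | (centre << 1) | right
        out.append((RULE >> idx) & 1)
    return out

random.seed(0)
row = [random.randint(0, 1) for _ in range(64)]   # disorder: random initial row
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)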

Notice that in the quote from KN he too is inclined to see self-organization and design theory as two competing scenarios whereby elimination of one leaves the other as the “only game in town”.   In fact self-organization is mysterious enough to KN that it classifies as neither law-disorder nor design, but a tertium quid. The naturalism vs. intelligence dichotomy is so fixed in his mind that it has never occurred to him that self-organization of the law and disorder variety leaves us with similar issues of logical hiatus and computational complexity as does the idea that living configurations are simply a fait accompli. He just doesn’t make a connection between the large measure of computational complexity implicit in the selection of the right physical algorithms and a design decision! I see this as yet another manifestation of the false dichotomy of God did it vs. Naturalism did it.

Self-organization is, in fact, a very bad term. The elementary parts of the cosmos never organize in and of themselves; they only do so because an imposed and carefully selected physical regime controls them. The term “self” is yet another subliminal signal of the “naturalistic” view that somehow the elementary parts of the cosmos possess some power of organization in and of themselves. But think about it: That’s not unlike claiming that the bits in, say, a Mandelbrot set have the innate power to organize themselves into intricate patterns!

EA, 61: . . .  the evidence clearly shows that there are not self-organizing processes in nature that can account for life.
This is particularly evident when we look at an information-rich medium like DNA. As to self-organization of something like DNA, it is critical to keep in mind that the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium. By definition, therefore, you simply cannot have a self-ordering molecule like DNA that also stores large amounts of information.
The only game left, as you say, is design.
Unless, of course, we want to appeal to blind chance . . .

My Comment: EA is probably right that there is no evidence for self-organization - but only self-organization in the sense of an extra tertium quid factor. There is of course evidence for evolution as a form of self-organization arising from a cellular automaton-like system, but just how obliging this evidence is and just how successfully the theory joins the dots of the data samples is what the debate is about!

EA's point about the conflict between information storage and self-organization is, I think, this: self-organization, at least as it is conceived by Richard Johns and myself, is a highly constrained process; though it may generate complex forms it nevertheless has low redundancy, in as much as it is not possible to arbitrarily twiddle the bits of a self-organized configuration without the likelihood of violating the algorithmic rules of the process. In contrast, arbitrary information storage allows, by definition, arbitrary bit twiddling, and therefore one can't use a self-organized system to store any old information. Self-organization only stores the information relevant to the form it expresses. For example, I couldn't arbitrarily twiddle the bits of a Mandelbrot set without violating the rules of the algorithm that generated it.
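As a toy demonstration of this low redundancy, here is a sketch using the standard escape-time algorithm for the Mandelbrot set; the grid resolution and the choice of which cell to flip are mine and entirely arbitrary. Twiddle any one bit of the generated pattern and that bit is no longer what the algorithm says it must be:

```python
def in_mandelbrot(c, max_iter=50):
    """Standard escape-time test: does z -> z*z + c stay bounded?"""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return 0
    return 1

# Generate a small membership grid over the complex plane.
grid = [[in_mandelbrot(complex(-2 + x * 0.05, -1.25 + y * 0.05))
         for x in range(60)] for y in range(50)]

# "Twiddle" one bit: the altered grid is no longer the output of the rule.
grid[25][30] ^= 1
recomputed = in_mandelbrot(complex(-2 + 30 * 0.05, -1.25 + 25 * 0.05))
print("flipped cell still consistent with the algorithm?",
      grid[25][30] == recomputed)   # False: the rule pins every bit down
```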

However, I believe EA has misapplied this lesson with some hand waving. If OOL and evolution have generated life using the algorithms of a cellular system, this would classify as self-organization (albeit with "self" being a complete misnomer). OOL and evolution would work by virtue of the selection of what is likely to be a very rare algorithmic case, and this rarity would imply a correspondingly high level of information. Self-organized systems are algorithmic ways of storing the information found in the complex patterns they generate. Ergo, EA's point about self-ordering systems and their lack of ability to store information is misleading; true, they can't store information about systems other than the forms they define, but they nevertheless do store information of a special kind. What I think EA really means is that self-ordering systems can't store arbitrary information.

The type of "think" that EA displays here is reminiscent of an argument I once saw on UD (although I've lost the exact chapter and verse). It went along these lines: self-organization requires "necessity"; necessity implies a probability of 1, which in turn implies an information content of zero; therefore self-organization can't store information. This argument is false and appears to be based on the misleading connotations of the word "necessity". What these IDists refer to as "necessity" is simply algorithmic constraint. Since the set of all algorithmic constraints is very large, the selection of a particular suite of constraining algorithms is highly contingent and is hardly a "necessity". Conversely, a book of random numbers is very "contingent" to the observer who first comes to it and thus stores lots of "information". However, once the observer has used the book and committed it to memory, its information value is lost. "Information" is observer dependent. In fact, depending on the state of the observer's knowledge, so-called "necessity" can be packed with information whereas so-called "contingency" may have a zero information content.
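The counting behind this point can be sketched very simply. Treating the selection of one constraining algorithm from a class of N equiprobable candidates in textbook Shannon fashion, the selection carries log2 N bits. The elementary cellular automata figures below are standard, though casting them as "selection information" is my own framing:

```python
from math import log2

def selection_information(class_size):
    """Surprisal, in bits, of picking one option uniformly from a class."""
    return log2(class_size)

# Once chosen, an elementary CA rule runs as pure "necessity", yet the
# choice itself, one rule out of 256, carries information:
print(selection_information(256))                            # 8.0 bits

# Binary rules on a radius-r neighbourhood: 2**(2**(2r+1)) candidates,
# so selection information explodes as the rule class widens.
for r in (1, 2, 3):
    print(r, selection_information(2 ** (2 ** (2 * r + 1))))  # 8, 32, 128 bits
```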

EA, in thinking that he has chased self-organization out of town, invokes the habit of mind which automatically separates out self-organization and design as two very distinct processes. He consequently concludes that design is the only game left in town. EA expresses no cognizance of the fact that, using William Dembski's principles, he has also chased away what could itself classify as a form of design: for high improbability is also likely to be found in the selection of the rare algorithmic cases needed to make self-organization work.

Kairosfocus finishes with this:

So — noting that self-ordering is a species of mechanical necessity and thus leads to low contingency — we see the significance of the trichotomy necessity, chance, design, and where it points in light of the evidence in hand regarding FSCO/I in DNA etc. END

My Comment: This statement identifies mechanical necessity with low contingency; I think that's intended to suggest that mechanical necessity cannot be the information bearer required for life, a conclusion that, as far as I'm concerned, may or may not be true.


***
Let me stress that I have no vested interest in evolution as a theory and will continue to follow the views of the de-facto IDists with great interest. But I certainly would not argue against evolution along the above lines. 

Tuesday, March 19, 2013

Config Space via Mathematical Impressionism. Part 2

This series is intended to provide a very general conceptual framework for thinking about evolution. In the first part I introduced the following graphical representation of configuration space:



The horizontal axis represents the size of a configuration. The vertical axis is the logarithm of the total number of logically conceivable configurations consistent with a configuration size of value S. For a given size S, each possible configuration is counted by mapping it to a point on the Log Z-S plane. In order to organize this count of points the area under L0 has been divided up into wedge-shaped bands using lines L1 to Ln. If we take a given size S, then the vertical distance across a band is the Log measure of the number of configurations that have a particular disorder value, where disorder increases from bands 1 to n respectively.
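A toy instantiation may help fix ideas, though I stress it is only a toy: take the configurations of size S to be binary strings (which the graph itself does not assume) and use the number of 1-bits as a crude stand-in for disorder. Then Z = 2^S, so L0 is a straight line, and the band widths fall out of the binomial coefficients:

```python
from math import comb, log2

S = 100   # configuration size (binary strings as a toy stand-in)

# The line L0: log of the total count of conceivable configurations of size S.
print("Log Z at S =", S, "is", log2(2 ** S))        # = S: L0 is a straight line

# Carve the count into "disorder" bands, here indexed by the number of 1-bits k;
# k near 0 (or S) is highly ordered, k near S/2 is maximally disordered.
for k in (0, 5, 25, 50):
    print(f"band k = {k:3d}: log2(count) = {log2(comb(S, k)):7.2f}")
```

The mid-disorder bands utterly dominate the count, which is the point the wedges on the graph are meant to convey.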

Arranging configurations of a particular value of disorder and size into a 1 dimensional line doesn't do justice to the multidimensional nature of configuration space, a fact I alluded to in the first part of this series. Mapping configurations of a particular disorder and size onto a single vertical line in one of the wedges above will have the effect of forcing a separation on otherwise natural near neighbors in configuration space. In fact this is similar to the effect that occurs when one maps a multidimensional space onto linear computer memory; neighbouring points get separated. 

In spite of the limitations of my graphical representation we can nevertheless use it to help talk about the conditions needed for evolution to occur.

In the first part I defined living structures as configurations with powers of self-perpetuation - a process that includes self-repair and reproduction. Therefore the sort of self-perpetuation I'm thinking of is very proactive in that it is not simply down to atomic bonding stability (as it is for strong crystalline structures), but instead a form of maintenance that depends on a blend of proactive repair and reproduction; in fact in terms of molecular bonds living materials are by and large very fragile.

One of the fairly obvious requirements of evolution as conventionally understood is that of "reducible complexity" (I have talked about this point many times in this blog). Given axioms 3 and 4 (see Part 1), conventional evolution requires that living configurations, when mapped to configuration space, give rise to a set of points in this space that are close enough to one another to form a completely connected region; very likely this region would be the multidimensional equivalent of a "spongy structure" made up of extremely thin membrane walls. This connectedness means that the random agitations of evolutionary gradualism can set up a diffusional migration across configuration space without resort to highly improbable saltational leaps. It is this connected structure that defines what "reducible complexity" means. It also explains why so many in the de-facto "Intelligent Design" community are quite sure that living structures are "irreducibly complex" rather than "reducibly complex". A class of structures is irreducibly complex if its members form a scattered set in configuration space - that is, they do not form a connected set but are by and large individually isolated. If self-perpetuating structures are arranged as an irreducibly complex set in configuration space then these structures can only be reached by saltational leaps. The de-facto ID community then contends (with some plausibility) that if this is the case then the only agent we know capable of literally engineering these leaps is intelligence.
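The distinction between a connected ("reducibly complex") set and a scattered ("irreducibly complex") set can be made precise with a toy sketch. Here single-bit flips stand in for gradual variation, and the "viable" sets are invented purely for illustration; the real space is of course multidimensional and astronomically bigger:

```python
from collections import deque

def components(viable, n):
    """Split a set of n-bit configurations into connected components, where
    neighbours differ by a single bit flip (gradualism's step size)."""
    seen, comps = set(), []
    for start in viable:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            x = queue.popleft()
            comp.add(x)
            for i in range(n):
                y = x ^ (1 << i)
                if y in viable and y not in seen:
                    seen.add(y)
                    queue.append(y)
        comps.append(comp)
    return comps

# Hypothetical viable sets on 4-bit configurations:
reducible   = {0b0000, 0b0001, 0b0011, 0b0111, 0b1111}   # a connected "thread"
irreducible = {0b0000, 0b0110, 0b1111}                   # isolated islands

print(len(components(reducible, 4)))    # 1 component: diffusion can cross it
print(len(components(irreducible, 4)))  # 3 components: only saltation connects them
```

Diffusion can wander anywhere within a single component; crossing between components requires a saltational leap.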

To be fair to the ID community, the notion that organic structures form a reducibly complex set is moot on at least three counts:

ONE) If a reducibly complex set of self-perpetuating structures exists then it is likely to be highly sensitive to the selected physical regime. I suspect, although I have no proof, that the physical regimes implying reducible complexity form a very small class indeed; I guess that any old selected physical regime won't do. But even if physical regimes that favour reducible complexity have at least a mathematical existence, we are still left with the question of whether our particular physical regime is one of them!

TWO) Axiom 2 tells us that the set of living structures is tiny compared to the set of all possible non-self-perpetuating structures. This fact is an outcome of axiom 1 and the nature of disorder: if living structures occupy the mid regions between high order and high disorder then the logarithmic nature of the vertical axis on the Log Z-S graph will imply that disordered configurations are overwhelmingly more numerous. This raises the question of whether there are simply too few self-perpetuating structures to populate configuration space even with a very thin spongy structure; in fact the spongy structure may be so thin that although, mathematically speaking, we have an in-principle reducible complexity, in terms of practical probabilities the structure is so tenuous that it may as well not exist!

THREE) My definition of life in terms of self-repair and reproduction would seem to imply a relatively high threshold of configurational sophistication. Even if this set of structures forms a completely connected set in configuration space, how did the first structures come about? Their sophistication would seem to demand a size that is too large to have come about spontaneously (see Axioms 2 and 3; a toy version of this probability collapse is sketched just after this list). Therefore, if evolution is to work, our reducibly complex set of structures must be continuously connected to, and blend with, a set of small stable structures toward the lower size end of our graph, where small configuration sizes mean that the probability of spontaneous appearance is relatively high (an implication of axiom 2). This is the subject of the Origins of Life (OOL), which as far as I'm aware doesn't have any substantive scenarios on the table.
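As promised, here is a toy version of the probability collapse lurking behind points TWO and THREE. Suppose, purely hypothetically, that the number of viable structures grows like g^S with g < 2 while the whole space grows like 2^S; the growth rate g = 1.5 is my own invention, chosen only to show the shape of the problem:

```python
from math import log2

# Hypothetical: viable structures grow like g**S, the whole space like 2**S.
g = 1.5
for S in (20, 100, 300):
    log2_p = S * log2(g) - S   # log2 of the chance a random size-S config is viable
    print(f"S = {S:3d}: P(random config is viable) ~ 2**{log2_p:7.1f}")
```

Even with a generously growing viable set the chance of a spontaneous hit collapses exponentially with size, which is why the reducibly complex region must reach down to small, spontaneously accessible configurations if OOL is to get started.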

I must express (again) my feeling that solutions to the above questions are not likely to be succinctly analytical, because I suspect that attempts to solve them analytically will hit Wolfram's computational irreducibility barrier. That is, the only way of probing these questions may be to do a full simulation, because there may be no shorter way of computing the result than working, event by event, through the full natural history of the world. But perhaps I'm being too pessimistic!

***

The de-facto "ID" community, in my opinion, are not getting the respect and hearing they deserve. After all, the big issues I've outlined above don't have obvious answers. Nevertheless, as I have expressed many times before, I continue to feel uneasy about the de-facto "ID" community's ulterior philosophy and underlying motivation. This uneasiness stems from: a) their failure to register that even bog-standard evolutionary theory presupposes highly computationally complex preconditions, that is, high-information conditions (which is essentially the lesson from their very own William Dembski); b) that many de-facto IDists still see the subject through the fallacious God did it vs. Naturalism did it dichotomy. This dichotomy is seductive to both theists and atheists. The polarized and acrimonious state of the debate in North America, where it is cast in the mould of a "Masculine God vs. Mother Nature" paradigm, has probably helped keep this dichotomy alive. In this context the natural history question is framed entirely in terms of whether it is guided or unguided - guided by a driving masculine homunculus or left unguided by a scatty mother nature. So, in my next part I will look into the subject of whether evolution, as it is conventionally conceived, has direction.

North American Paradigm: Mother Nature or Guiding Homunculus?

Finally I must add this caveat: although I eschew the North American paradigm that swings so much on the question of whether natural history is "guided or unguided", this is not to say that the established picture of evolution is correct. As I have said before, the game of chess is considerably constrained by its rules, but if you try moving chess pieces about at random, even under the constraint of those rules, you are unlikely to end up with a sensible game. Physics, as we currently understand it, may not be a strong enough constraint to imply a computation that follows the established evolutionary paradigm. In later parts of this series I may probe whether there are ways round the problems outlined above.
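The chess analogy is easy to try for yourself. A minimal sketch, assuming the third-party python-chess package (not part of the standard library):

```python
import random
import chess   # third-party "python-chess" package: pip install python-chess

random.seed(42)
board = chess.Board()

# Move pieces at random, but always within the constraint of the rules.
while not board.is_game_over() and board.fullmove_number < 200:
    board.push(random.choice(list(board.legal_moves)))

# "*" means the game was still unresolved when we gave up.
print(board.result(), "after", board.fullmove_number, "full moves")
```

Every move is legal, yet the play is aimless: the rules constrain the random walk without directing it anywhere in particular. My point about physics is analogous.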

End Note: Clearly living structures capable of self-perpetuation in one environment may not survive in another. Therefore the way self-perpetuating structures are arranged in the Log Z-S plane will depend on the environment we are looking at. Moreover, since the presence of organic structures is part and parcel of the environment, this will introduce non-linear feedback effects. For simplicity's sake these non-linear issues have been left out of the above discussion.