Friday, January 22, 2016

Necessary Conditions for Evolution: The Spongeam

Conventional evolution is conceived as an incremental process of change. That has implications for the layout of self-perpetuating, self-replicating structures in configuration space.

The following are some notes on the object I refer to as the spongeam – a structure in configuration space that is a necessary (but not sufficient) condition for conventional evolution. The spongeam is a logical implication of evolution as it is currently understood.

Evolution is sometimes wrongly caricatured as an unguided chance process.  See for example the following statement by IDist “niwrad” on the website Uncommon Descent. In his UD post niwrad expresses his dismay at theologian Alastair McGrath’s acceptance of “theistic evolution”. In this quote I have added emphases where I believe niwrad errs:

Unguided evolution (whatever meaning we give it, and all the more so if we give it a meaning based on Darwinian evolution, which is what theistic evolutionists and McGrath mean) is a theory based on chance, on randomness, i.e. accidents. If “the universe is not an accident” — as he [McGrath] rightly believes — how can evolution, an engine of accidents, be an explanation of “how the world started”, with the same plausibility of creationism and ID?

“Unguided evolution…a theory based on chance… an engine of accidents” are all phrases that thoroughly misrepresent the true situation with evolutionary theory as it stands today. The answer to niwrad’s last question has been provided many times on this blog: In short, conventional evolution is not unguided but is in fact a random walk process which takes place within a tight envelope of constraint. Therefore McGrath’s apparent belief in theistic evolution is consistent with a belief that the universe is not an accident. Let me expand….

Even though I myself have reservations about evolution (as currently understood), it is certainly not a purely chance process. It is ironic that the very reason why I can’t easily dismiss evolution on the grounds of entropy is precisely because it is not an unguided process; it is, in fact, a highly channelled form of entropy: for just as life annexes and organises increasing amounts of matter as it populates the world under the constraint of the information implicit in its machinery, and yet still keeps within the second law of thermodynamics, so too would evolution. Conventional evolution can only work if “randomness” plays out within a very small probabilistic envelope. (It is arguable – albeit only arguable – that the high information implicit in this envelope is implicit in physics.) It is ironic that not many in the de facto ID community understand the conclusion of one of their very own gurus, William Dembski, whose “Conservation of Information”, if applied to the evolution of life, can be written as:

Probability of life given our physical regime: P(Life | Physical Regime) = p/q

Where:

p is the a priori probability of living configurations; that is, the probability of selecting a member from the class of living configurations assuming the principle of indifference over the space of all possible configurations. The organised complexity of living things clearly implies that p is going to be extremely small.

q is the probability of selecting a physical regime which enhances the chances of selecting life, assuming the principle of indifference over the space of possible physical regimes.

See this paper for a “proof” of the above relation.
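To get a feel for how the relation behaves, here is a minimal numerical sketch in Python (my own toy, with purely illustrative exponents rather than real estimates), worked in log10 units because the raw probabilities would underflow ordinary floating point:

```python
# Toy illustration of P(Life | Physical Regime) = p/q, in log10 units.
# The exponents are illustrative placeholders, not real estimates.
log_p = -500.0  # a priori log10-probability of hitting a living configuration

for log_q in (-100.0, -499.0, -500.0):
    log_P = log_p - log_q  # log10 of p/q
    print(f"log10 q = {log_q}: log10 P(Life | Regime) = {log_P}")

# Only when q is almost as improbable as p itself does the conditional
# probability p/q approach anything realistic (log10 P near 0).
```

The point of the arithmetic is simply that a realistic P(Life | Physical Regime) forces q down to the same minuscule order as p, which is the sense in which the constricting envelope must be very tight.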

To give life a realistic probability of coming about, the value of q, which is a measure of the tightness of the constricting envelope, must be very small. It is this constricting envelope which I identify as an object I call the spongeam. In the points below I develop the concept of the spongeam stage by stage, on the assumption that standard ideas of evolution apply:

  1. We know that there is a subset of configurations which are stable enough to self-perpetuate and self-replicate (I refer to these as self-perpetuating, self-replicating structures, or SPSRSs). One fairly compelling corollary of this definition of SPSRSs is that they must be highly organised to carry out their tasks and therefore constitute a very tiny set of possibilities when compared to the overwhelming size of the configuration space consistent with our physical regime.
  2. An important question is this: How does the set of SPSRSs populate configuration space? What does this pattern look like when viewed across configuration space?
  3. If conventional evolution has occurred then certain conditions must be true of this set. Viz: because conventional evolution steps through configuration space in small steps, it follows that for the SPSRS set to be favourable to evolution, that set must be fully connected, in as much as any one member must be no further than some typical maximum distance from at least one other member of the set; let’s call that distance s.
  4. If conventional evolution has been actualised in paleontological history it follows that some number H of steps of typical size s separates us from the first ancestors. That is, there must be at least one unbroken path between us and those antecedent structures.
  5. But even if this connected set exists there is another necessary condition of evolution: If we select any particular SPSRS and then amend it in m steps of typical size b, always ensuring that each step moves away from the selected SPSRS, then the high order of SPSRSs implies that most paths will lead to non-viable structures – that is, structures that are not SPSRSs. Thus we have a second necessary condition for evolution, viz: compared to the number of outgoing paths which lead to non-viability, there must be a sufficient number of paths connecting an SPSRS to the rest of the set for it to have a realistic chance of stepping to another slightly different SPSRS. Therefore, the set of SPSRSs must not only be connected but the connections must be sufficiently strong to give a realistic chance of random evolutionary stepping across the spongeam. (A toy sketch of this connectivity condition follows the list.)
  6. There is one aspect of this matter which I consistently miss out for simplicity’s sake; this is the fact that populations of SPSRSs become their own environment and therefore there is a feedback relation between SPSRSs and their environment. This feedback is likely to be non-linear and therefore gives rise to chaos, thus compounding the difficulties besetting human efforts to get an analytical handle on the spongeam.
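As promised in point 5, here is a toy Python sketch of the connectivity condition (entirely my own illustrative construction: bit strings stand in for configurations, Hamming distance stands in for the step metric s, and a breadth-first search asks whether an unbroken chain of small steps joins two members of the viable set):

```python
from itertools import combinations
from collections import deque

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def build_step_graph(viable_set, s):
    """Link any two viable configurations no more than s steps apart."""
    graph = {c: [] for c in viable_set}
    for a, b in combinations(viable_set, 2):
        if hamming(a, b) <= s:
            graph[a].append(b)
            graph[b].append(a)
    return graph

def connected(graph, start, goal):
    """Breadth-first search: is there an unbroken path of small steps?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# A tiny hypothetical set of viable configurations (stand-ins for SPSRSs)
viable = ["0000", "0001", "0011", "0111", "1111", "1100"]
g = build_step_graph(viable, s=1)
print(connected(g, "0000", "1111"))  # True: a chain of single steps exists
print(connected(g, "0000", "1100"))  # False: "1100" sits more than one step
                                     # from every other member of the set
```

In the full problem the set is unimaginably sparse and high-dimensional, but the question being asked is the same: does the set of SPSRSs form a connected network at step size s, and are there enough connecting paths for a random walker to find them?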
Ignoring point 6, the foregoing creates in my mind’s eye a kind of static, spongy-looking structure in configuration space; that is, a network linking the set of SPSRSs into a connected set. Viz:

If the spongeam exists it might be the multidimensional equivalent of this 3D sponge.

If you imagine this structure to be far more attenuated than is shown here, then the result would be an approximation to what I envisage to be the kind of structure in configuration space which is a necessary condition of evolution. The next question is: how does movement take place from one point to another in the spongeam, thereby giving us an evolutionary dynamic? The answer to that question (and the answer I’ve assumed above) is that in standard evolution the motion is one of random walk – that is, diffusional. But random walk or not, it is clear that the tight constraint which the spongeam puts on the random motions gives the lie to niwrad’s claim that evolution is unguided. (See also atheist Chris Nedin who, ironically, is also loath to admit that evolution is an inevitably guided process. The ulterior motive here may be an unwillingness to accept that any workable model of evolution, as William Dembski has shown, must be tapping into some source of information.)

The spongeam is more fundamental than the fitness surface. The fitness value is a kind of “field gradient” that permeates the network of the spongeam and biases the random walk in places, but not necessarily everywhere. (See here for an example of an equation for biased random walk.)
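For readers who want to see what a bias does to a random walk, here is a generic one-dimensional Python sketch (not the specific equation linked above; p_forward is just an illustrative bias parameter standing in for the fitness “field gradient”):

```python
import random

def mean_displacement(steps, p_forward, trials=2000):
    """Average displacement of a 1-D random walk after `steps` steps.

    p_forward = 0.5 gives pure diffusion (the unbiased random walk);
    any excess over 0.5 acts like a gradient dragging the walk along.
    """
    total = 0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += 1 if random.random() < p_forward else -1
        total += x
    return total / trials

print(mean_displacement(1000, 0.50))  # close to 0: diffusion only
print(mean_displacement(1000, 0.55))  # close to 100: bias produces drift
```

The unbiased walk wanders with no net direction; the slightly biased walk drifts steadily, which is the sense in which a fitness gradient, where present, steers the otherwise diffusional motion through the spongeam.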

The above points, 1 to 6, are an enunciation of the very general mathematical conditions which are logically necessary for conventional evolutionary processes to take place. But I must voice my standard disclaimer here: my own bet is that the minuscule set of viable self-perpetuating replicators is unable to populate configuration space with a sufficient density to form a connected set which facilitates standard evolution. Doubts on this point are the main reason why I have branched out into my speculative Melencolia I series.

What is the evidence for the spongeam? Since the spongeam is a necessary condition for conventional evolution, the evidence for it is as strong (or weak) as the evidence for evolution. As I’m not a biologist I try to avoid commenting on the strengths or weaknesses of the observations for evolution. But these observations are nevertheless important: in view of the analytical intractability of the spongeam, the only human alternative to proving or disproving its existence from first principles is to be found in observation.

One important point I would like to make here is that the existence of the spongeam would mean that evolution works by using front-loaded imperative information rather than the back-loaded information of a declarative computation. I explain the difference between front-loading and back-loading in this paper.

Atheist Joe Felsenstein clearly does not accept evolution to be an “unguided engine of accidents” and understands that it must be a front-loaded process; this is evidenced in the posts I did on his comments about the work of IDists Dembski, Ewart and Marks (see here and here). Although Felsenstein may not think in terms of the spongeam, he has nevertheless shown it to be arguable that conventional physics is responsible for the repository of information inherent in fitness surfaces. In the comment sections of the posts I have just linked to, he writes:

If the laws of physics are what is responsible for fitness surfaces having "a huge amount of information" and being "very rare object[s]" then Dembski has not proven a need for Intelligent Design to be involved. I have not of course proven that ordinary physics and chemisty is responsible for the details of life -- the point is that Dembski has not proven that they aren't.

Biologists want to know whether normal evolutionary processes account for the adaptations we see in life. If they are told that our universe's laws of physics are special, biologists will probably decide to leave that debate to cosmologists, or maybe to theologians.

The issue for biologists is whether the physical laws we know, here, in our universe, account for evolution. The issue of where those laws came from is irrelevant to that.

Given that Dembski and colleagues appear to be working with an ancillary, non-immanent concept of intelligence, Joe Felsenstein’s comments above are true: Dembski and co, using their explanatory filter epistemic, work within an Intelligence vs natural forces conceptual framework. This framework is valid for ancillary intelligences such as humans or aliens. Therefore, should Joe Felsenstein plausibly demonstrate that imperative information in the laws of physics is efficacious in the generation of life, the assumed position of many IDists becomes problematical. These IDists have implicitly committed themselves to the view that the imperative information in the laws of physics is classed as “natural” and is therefore incapable of generating life. The motive behind this position is that if physics should prove unable to account for life (as they believe), they can then invoke their explanatory filter to argue that life is the work of an ancillary intelligence, or if you like an intelligence of the gaps. However, for Felsenstein the imperative information in physics is more likely to be due to some unknown physics of the gaps.

It should be noted here that some IDists believe the imperative information needed to generate life is provided in punctuated interventional installments rather than in one grand slam act of Divine fiat or by the one-off creation of the grand narrative of physics. See for example the posts here and here.

Tuesday, January 12, 2016

Melencolia I Part 7: Creating Information II

Part 7 of Melencolia I can be downloaded from here.  I reproduce the introduction to this paper below:

Introduction

The de facto Intelligent Design community lays claim to the notion that information is conserved, at least as far as so-called “natural processes” are concerned. In particular, one of the founding gurus of the movement, William Dembski, has stated the conservation of information in mathematical terms. The aim of this paper is to investigate this claim and its limitations.

What is information? There is more than one valid answer to that question, but this paper will be looking at the kind of information defined as the negative of the logarithm of a probability p; that is, -log p. This is the definition of self-information used by William Dembski. The obvious corollary here is that self-information increases with increasing improbability. The rationale behind this definition is that the lower the probability of an event, the more unexpected it is and therefore the more informative it becomes if it should occur; information is data you don’t expect, don’t anticipate and don’t know to be true; that is, you learn from it when it manifests itself to you and makes itself known.

As an example, consider a configurational segment of length n taken from a series of coin tosses. Given n, it follows that the number of possible configurations of heads and tails is 2^n. If we assume that each of these possible configurations is equally probable then any single configuration will have a probability of 2^-n. For large n this is going to be a very small value. In this case our knowledge of which configuration the coin tossing will generate is at a minimum; all possibilities from amongst a huge class of possibilities are equally likely. Consequently, when the coin tossing takes place and we learn which configuration has appeared, it is highly informative because it is just one amongst 2^n equally likely possibilities. The measure of just how informative that configuration is, is quantified by I, where:

I = -log 2^-n = n log 2
(0.0)
Conveniently this means that the length of a sequence of coin tosses is proportional to the amount of information it contains.
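A quick numerical check of equation (0.0), using logarithms to base 2 so that the answer comes out in bits (the choice of base is mine; the equation holds in any base up to a constant factor):

```python
import math

def self_information_bits(p):
    """Self-information -log2(p) of an outcome with probability p, in bits."""
    return -math.log2(p)

# A specific sequence of n fair coin tosses has probability 2**-n,
# so its self-information grows linearly with n, as equation (0.0) says.
for n in (1, 10, 100):
    print(n, self_information_bits(2.0 ** -n))  # prints n alongside n bits
```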

Information, as the very term implies, is bound up with observer knowledge: when calculating probabilities William Dembski uses the principle of indifference across mutually exclusive outcomes; that is, when there is no information available which leads us to think one outcome is more likely than another, we posit a priori that the probabilities of the possible outcomes are equal. I’m inclined to follow Dembski in this practice because I hold the view that probability is a measure of observer information about ratios of possibilities. In my paper on probability I defined probability recursively as follows:


Probability of case C = (Sum of the probabilities of cases favouring C) / (Sum of the probabilities of all cases)
(0.1)

This definition is deliberately circular or, if you want the technical term, recursive. Evaluating a recursively defined probability depends on the recursion terminating at some point. Termination will, however, come about if we can reach a point where the principle of indifference applies and all the cases are equally probable; when this is true the unknown cancels out on the right-hand side of (0.1) and the probability on the left-hand side can then be calculated. From this calculation it is clear that probability is a measure of human information about a system in terms of the ratio of possibilities open to it given the state of human knowledge of the system. This means probabilities are ultimately evaluated a priori, in as much as they trace back to an evaluation of human knowledge about a system; the system in and of itself doesn’t possess those probabilities.
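As a small sketch of the terminal step of (0.1): when the recursion bottoms out at indifference every case carries the same unknown weight, and that weight cancels, leaving a bare ratio of counts. The code below is my own toy illustration (the function name and the dice example are hypothetical, not taken from the paper):

```python
from fractions import Fraction

def probability(favouring_cases, all_cases):
    """Terminal step of the recursive definition (0.1): at indifference
    every case carries the same unknown weight w, so w cancels and only
    the ratio of counts remains."""
    w = Fraction(1)  # stand-in for the common, unknown case probability
    return (len(favouring_cases) * w) / (len(all_cases) * w)

# Two fair dice, with indifference assumed over the 36 ordered outcomes:
all_cases = [(i, j) for i in range(1, 7) for j in range(1, 7)]
sevens = [c for c in all_cases if sum(c) == 7]
print(probability(sevens, all_cases))  # 1/6
```

Whatever value w might have taken, it divides out; this is exactly the sense in which the resulting probability reflects the observer’s state of knowledge (indifference) rather than a property of the dice themselves.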

It follows then that once an improbable event has occurred and is known to have occurred, the self-information of that event is lost because it no longer has the power to inform the observer; the probability of a known event is unity and therefore carries zero information. But for an observer who has yet to learn of the event, whether the event has actually happened or not, the information is still “out there”. Information content, then, is observer relative.

Observer relativity means that self-information is not an intrinsic property of a physical system but rather an extrinsic property. That is, it is a property that comes about through the relation the system has with an observer, and that relation is to do with how much the observer knows about the system. Therefore a physical system loses its information as the observer learns about it, and yet at the same time there is no physical change in that system as it loses this information; where then has the information gone? Does it now reside in the observer’s head? But for another observer who is still learning about the system that information, apparently, remains “out there, in the system”.

Given the relativity of self-information, treating it as if it were an intrinsic property of a physical system can be misleading. The observer relativity of self-information makes it a very slippery concept: as an extrinsic property that relates observers to the observed, a system can at once possess information and yet not possess it!

This observer-information-based concept of probability is very relevant to the subject in hand: that is, to the cosmic generation of life. Given that the configurations of life fall into the complex ordered category, it follows that as a class life is a very rare case relative to the space of all possible configurations. So, assuming the principle of indifference over the total class of possible configurations, we would not expect living configurations to exist; this is because their a priori probability must be extremely small and therefore their self-information or surprisal value is very high: living configurations are very special and surprising configurations.

But the fact is life does exist and therefore the a posteriori probability of instantiated life is unity. It is this intuitive contradiction between the a priori probability and the a posteriori probability of life that constitutes one of the biggest of all scientific enigmas. In this paper I will attempt to disentangle some of the knots that self-information introduces when it is used to formulate a conservation law. I also hope to throw some light on the a priori/a posteriori enigma of life.