Wednesday, January 05, 2011

Beyond Our Ken: On Mature Creation. Part 2

And God created the world "Just like that"

In the first part of this series I described the necessity forced on the YECs at Answers in Genesis to categorize objects as either a-historical or historical. A-historical objects display little evidence of having a history and therefore can be claimed by YECs to have been created as is, “Just like that”. In contrast, historical objects carry blatant evidence of their history, and in order to preserve the integrity of the creation this necessitates an historical explanation of that evidence. As Ken Ham’s organization, “Answers in Genesis”, wishes to maintain a semblance of scientific integrity it at least makes an attempt to provide historical explanations of historical objects. But the real problem turns out to be the difficulty of identifying just what does and what does not constitute an historical feature. The following post is a response to Ken Ham’s Mature Creation article here.

To Ken Ham the human body, especially the non-degenerative pre-fall human body, is safely a-historical and therefore can be created as is, “just like that”. According to Ken that is indeed what happened when God created Adam and Eve in Eden. But what about that perennially awkward little feature, Adam’s navel? Well, OK then, concedes AiG, Adam didn’t have a navel (see this link), because if he did it would be a blatantly historical feature and therefore suggestive of a misleading, bogus history.

It’s very easy to miss that little feature we call the navel and so perhaps if we look a little closer at the human body we might find other features that have to be “deleted” in order to fit in with mature creation concepts. What about, for example, the telomeres in the genetic code? Telomeres are lengths of repetitive DNA that get shorter and shorter as cells divide and reproduce. In the normal human body cell reproduction must be continuous in order to replace dead cells. Therefore in older people, whose bodies have been refreshed by more cell divisions, the telomeres will be shorter than in younger people. This means that telomere length is a feature that, like the navel, is a tell-tale sign of a history. My guess, then, is that this age indicator would be declared by Ken Ham as absent in the pre-fall Adam. Or would he, alternatively, declare that there is no cell death and division in the perfect human body? I don’t know.
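The telomere argument can be made concrete with a toy calculation. The sketch below is illustrative only: the starting length and loss per division are round numbers of the right order of magnitude, not clinically calibrated values.

```python
# Toy model of telomere shortening: each cell division trims a fixed
# number of base pairs from the telomere. Real attrition rates vary,
# but the monotonic downward trend is what makes telomere length a
# tell-tale age indicator.

def telomere_length(initial_bp, loss_per_division, divisions):
    """Telomere length (base pairs) after a given number of divisions."""
    return max(0, initial_bp - loss_per_division * divisions)

# Illustrative numbers only: ~10,000 bp at birth, ~50 bp lost per division.
for divisions in (0, 20, 60, 100):
    print(divisions, telomere_length(10_000, 50, divisions))
```

The point of the toy model is simply that telomere length is a monotone function of the number of divisions, i.e. of history.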

But even a human body that does not degenerate or have a navel is not obviously a-historical. Presumably, pre-fall human beings were intended to give birth to offspring, and those offspring would follow a fairly standard history of development that would enable one to estimate the age of a child. Offspring would reach maturity at roughly the same age, after which, according to Ken, no degenerative changes would take place. Thus, when faced with an adult like Ken’s Adam a minimum age could be imputed based on the ostensible evidence of his maturity; in short, the human body, which in most cases grows and develops, has an “appearance of history” if not an appearance of age.

The human memory is another case in point. The human mind is built for learning from birth; this would be a redundant feature in the pre-formed Adam. Adam’s ability to handle and articulate the world around him would not be based on learning drawn from a lifetime’s experience, and Ken’s Adam would have none of the proprietary features and idiosyncratic memories arising from a particular educational history. If creative integrity is to be maintained Ken’s Adam could have no memory of a history of learning; that is, no memory of the particular connections, social and physical, in which he learned his skills. Presumably, the Adam envisaged by Ken would have had a straight download of pure knowledge. In contrast, natural human skill is inevitably sequenced and built upon in stages: for example, walking cannot take place without an awareness of limbs, and awareness of limbs cannot take place without first learning that those visual patches called arms and legs which wave around in front of us can be controlled at will. There is, then, a natural historical sequencing implicit in human cognitive skill that Ken is asking us to overlook.


That Ken is at the edge of a fuzzy category is suggested by the fact that some of the decisions he makes as to what qualifies as a-historical seem rather surprising. In the article I have linked to there is a picture of some tree rings. Now, tree rings, as far as I am aware, represent a history of the differential growth of a tree; in effect tree rings are a record laid down by changes in season, weather, sunspot intensity and the like. In other words tree rings are indicators of a history. So how do you think Ken should regard tree rings? Is he going to declare that the fully functioning trees in the Garden of Eden had no rings just as Adam had no navel? I was surprised at Ken’s response:

Perhaps trees even had tree rings, as a regular part of the tree’s structure.

If you had asked me in advance I would have guessed (wrongly) that Ken would have opted for “no rings”; which goes to show that it is difficult to anticipate AiG’s decisional fiat as to what categorises as historical and what is a-historical.


While we are on the subject of trees there is, in fact, more subtle evidence that trees are not a-historical. Have a look at the overstated glass fibre “Tree of Life” that graces Ken’s kitsch creation museum (see picture). Under normal circumstances that gnarled trunk arises out of a history of differential growth. Moreover, consider the fractal nature of a tree’s structure: this may well be because the genetic code effectively contains one of those fractal growth algorithms, in which case the fractal shape of a tree is evidence of something whose form and function is bound up with its method of generation. A tree has the appearance of something that has grown, something with a history, something that is sequenced: looking at a tree we can say that one branch was formed after another as a consequence of the branching dependencies in its growth algorithm. Needless to say Ken has to ignore all this and instead simply assert that trees were created in the garden “as is”, which means that they only have an appearance of history and the ostensible signs of growth are to be regarded as bogus. This is a bit like claiming that a brick wall was created ex nihilo, when in fact the very rationale of a brick is to facilitate step-by-step construction by the human hand - unless of course the wall is in Ken’s creation museum, in which case it is likely to be a façade of glass fibre and thus totally bogus.
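To see why a tree’s branching shape implies sequencing, consider an L-system, a standard mathematical model of fractal plant growth. The particular rewriting rule below is my own toy example, not taken from botany; the point is that each rewriting pass is a growth stage, so later branches literally presuppose earlier ones.

```python
# A minimal L-system sketch of branching growth. Symbols: F = grow a
# segment, [ ] = open/close a side branch. Each generation rewrites
# every F using the rule, so generation n's branches are nested inside
# (and therefore historically dependent on) generation n-1's.

def grow(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"F": "F[F]F"}          # a deliberately simple branching rule
print(grow("F", rules, 1))      # F[F]F
print(grow("F", rules, 2))      # F[F]F[F[F]F]F[F]F
```

Reading off the nesting of the brackets recovers the order in which the branches appeared - which is exactly the sense in which a fractal tree shape encodes a history.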

One of the issues that troubles AiG is the second law of thermodynamics. Not only is AiG wrong in claiming that evolution violates the second law (something I have dealt with several times on this blog) but of late they have had to admit that it must be part of the pre-fall physical status quo, because it is so much a part of life as we know it. For example, the human body exploits the second law moment by moment. In assembling biopolymers, biochemistry depends on the fact that the solutions in which the assembly takes place are in random agitation and that, like Maxwell’s demon, the information in DNA effectively selects and locks in the placements of atoms and molecules as and when their random motions should, perchance, find the right position. The raised Gibbs energies of biopolymers result in the cooling of the surroundings, but ultimately the extra energy needed to maintain the ambient temperature derives from the thermodynamic rundown of the Sun, thus preserving the second law. Whatever the origin of DNA itself, the assembly of biopolymers is driven by the second law. Now, at one time I think it was current thinking amongst YECs that the second law of thermodynamics was part of the inheritance of the fall (see The Genesis Flood, by Whitcomb and Morris, p224). However, the inseparable integration of the second law with the dynamics of our world has made this view impossible, and now we find AiG suggesting that this law is so essential to the running of the cosmos that it is impossible to do without it. In this article AiG note that the second law is necessary in digestion, friction, breathing, the heating of the Earth by the sun, heat transfer, etc - the list goes on. In short a pre-fall world where the second law doesn’t apply is unimaginable.

But this leaves AiG between a rock and a hard place. Now, in principle one can imagine a human being whose genetic makeup facilitated the continuous refreshment of the bodily structure, thus preventing aging in the degenerative sense; as long as the information required to maintain the structural status quo was not lost, the human body would not age. However, the AiG admission that the second law is still operative in a pre-fall world means that non-living objects, which are by definition not refreshed, do age in terms of degeneration because they don’t have a regime of constant maintenance. Pre-fall non-living objects would age as required by the second law. Consequently many objects then become historical in as much as they display a history of damage, degradation and energetic rundown. For example, large intrusions and extrusions of magma from the Earth’s mantle cool according to the second law. Thus large bodies of granite are obviously not a-historical in as much as they must have a history of cooling. This fact means that AiG cannot claim that granitic intrusions and extrusions were created in a “mature” state. This has obliged AiG to engage these bodies of rock from a scientific point of view and attempt to work out cooling histories for them that are less than 6000 years (see here). It is clear, then, that igneous rocks are not a-historical. This appears to be an admission that some age indicators are reliable and can be used as predictors of age: AiG are themselves now claiming that age can be determined after all - but, of course, they are desperately trying to show that these age determinations return values less than 6000 years!
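As a rough illustration of why a cooling history is a genuine age indicator, the conductive cooling timescale of a body of rock scales with the square of its size, t ~ L²/κ. The sketch below is an order-of-magnitude estimate only, using a typical thermal diffusivity for rock of about 10⁻⁶ m²/s; it ignores convection, hydrothermal circulation and latent heat, all of which real cooling models must handle.

```python
# Order-of-magnitude conductive cooling timescale, t ~ L**2 / kappa,
# where L is the half-width of the body and kappa is the thermal
# diffusivity (~1e-6 m^2/s is typical for rock).

SECONDS_PER_YEAR = 3600 * 24 * 365.25

def cooling_timescale_years(half_width_m, kappa_m2_s=1e-6):
    """Rough time for heat to diffuse out of a body of given half-width."""
    return half_width_m ** 2 / kappa_m2_s / SECONDS_PER_YEAR

print(cooling_timescale_years(1))      # a 1 m dyke: well under a year
print(cooling_timescale_years(1000))   # a 1 km pluton: tens of thousands of years
```

Even this crude estimate shows why large plutons embarrass a 6000-year chronology: the timescale grows quadratically with size, so kilometre-scale bodies imply cooling histories far longer than the YEC budget allows.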

Fundamentalists don’t like fuzzy boundaries; they much prefer a clear-cut world where it is easy to sort out the sheep from the goats. The fundamentalist needs to know who to boo and who to cheer, who to spiritually abuse and who to declare under God’s favour. Accordingly, a clear-cut legalistic system of faith test categories is set up in order to help identify and flag unrepentant sinners and Christian apostates. Trouble is, the deeper we get into the notion that God snapped His fingers and “mature” objects sprang into sight “as is”, the more detail is revealed which blurs the category boundaries. And it gets worse….

…to be continued.

Friday, December 24, 2010

Blind Alleys

I was interested in a statement on this post on UD claiming that evolution is “mathematically impossible”. I eagerly followed the link supplied to find what it was all about. Alas, I drew another blank: The assumption was that the underlying process driving evolution is reproduction and natural selection. This article doesn’t engage the question of the more general and abstract object needed to drive evolution, namely the arrangement of stable structures in configuration space.

Another rather notable post on UD is this one by Cornelius Hunter. He tries to make a distinction between “origins and operation”, a duality which to my mind is rather dubious as it smacks of the subliminal deism in the anti-evolutionist community. I shall be looking at that post in more detail in due course.

Stop Press 29/12/2010
Continuing with my recent theme of mechanism, interventionism and subliminal deism, here is a post on UD that contains a fairly clear expression of a paradigm at work which dichotomizes physical mechanism and interventionism. It also expresses a strong (and unsupported) assertion of the raison d’être of the anti-evolution community; namely, the belief that mechanism can’t generate life. (In the article read “frontloading” as “mechanism”.) This underlying philosophy of anti-mechanism is a reaction to the perceived threat of deism and the subliminal belief that mechanisms serve a redundancy notice on the "interventionist" God. For that reason anti-evolutionism will have a very strong hold on the minds of many religious people. Since "frontloading" (sic) requires Intelligent Agency, it is revealed that UD's position is not primarily one of Intelligent Design, but has more to do with an existential need to see God "re-employed" by giving Him an overt role in the processes of the physical world and to re-establish confidence in the "intervening" God.

Stop Press 31/12/2010
This looks interesting and possibly mould-breaking: a new poster on UD claiming to be a theistic evolutionist (“of sorts”). The poster shows signs of walking a tightrope: he is a Theistic Evolutionist, but has come to realize that “Darwinism” is different from “Evolution” and he has little patience for the former (I’d be interested to know how he makes this distinction). He’s got over his hostility to YECs and divine interventions and yet is comfortable with the concept that perhaps no direct interventions took place, if that is indeed the case, because for him the overriding issue concerns design detection; the means of creation are less important to him than the fact that things were designed and created.

Tuesday, December 21, 2010

More on “Self” Organisation

I was honoured to have received a comment from Richard Johns on my blog post about his "Self Organization" paper. I have reproduced Richard’s comment and my reply below. I have also added some further remarks:

Hi Timothy,

Thanks for your detailed treatment of my paper -- actually the best I have seen.

The objection you raise is a good one, although I think it can be answered. (Jeffrey Shallit made the same point in an email to me.)

There are simple (i.e. short) algorithms that can generate irregular strings, as I define them. You mention pseudo-random sequences, and I have previously thought about the digits of Pi. Hence these things are algorithmically simple, despite their irregularity.

As I pointed out to Shallit, I claim that irregular objects are *dynamically* complex, not *algorithmically* complex. And, while it seems certain that dynamically simple objects are algorithmically simple (since dynamical systems can be emulated by computers) the converse is far less obvious. In other words, while the digits of Pi can be generated by a short program, it might not be produceable easily within a dynamical system, from a random initial state.

If a converse theorem does hold, however (i.e. algorithmically simple objects are dynamically simple) then my arguments have gone wrong somewhere. But even in that case, self-organisation theories of evolution will be in a difficult position. For they will then be committed to the claim that living organisms are algorithmically (and dynamically) simple. In other words, living organisms are like Pi, merely *appearing* to be complex, while in fact being generated by a very short program. (Vastly shorter than their genomes, for example.)

Following Richard Dawkins, who coined the term "designoid" for an apparently designed object, one might say that living organisms are "complexoid". While perhaps not obviously false, this view is likely to be very unattractive.

My reply now follows:

Thanks for the comment Richard,

I am not sure I understand what you mean by producing an object *dynamically* as opposed to *algorithmically*. In your paper you seem to be using a cellular automata model to generate your objects. No problem with that, except that as far as I understand, cellular automata can be simulated algorithmically and therefore fall under the algorithmic category. Moreover, if Stephen Wolfram’s work is anything to go by (and some of my own work), cellular automata also generate complex “Pi”-like patterns pretty quickly - complex in the sense you have defined in your paper; that is, they show a high irregularity or “disorder” as I call it.

Now this doesn’t mean to say, of course, that life actually is the product of a clever dynamics, but given both your definition of irregularity and your use of an algorithmic cellular automata model, it seems we are back to square one - we haven’t yet succeeded in eliminating self organization from the inquiry. However, for reasons I have given in this blog entry I can see why a “soft focus” version of your limitative theorem applies. But given that the resources of an Intelligent Designer are within my particular terms of reference, it seems not outside the bounds of possibility that, should the required fruitful self organizing regime actually be a mathematical possibility, that intelligence is capable of contriving it.

If (repeat if) this is the case then it is wrong to conclude that life must therefore be algorithmically simple for this reason: The space of all possible algorithms, though a lot smaller than the space of all possible configurations, is still a very, very large space as far as we humans are concerned. I suspect (and this is only a hunch) that not any old algorithm has the right Self Organising properties required to generate living things - in which case selecting the right algorithm is then a computationally complex task; that is, life is not algorithmically simple in absolute terms.

One more thing: Imagine that you were given the problem of Pi in reverse; that is you were given the pattern of digits and yet had no clue as to what, if any, simple algorithm generated it. The hard problem then is to guess the algorithm – generating Pi after you have found the algorithm is the easy problem. So to me life remains algorithmically complex even if it’s a product of SO.

SOME FURTHER REMARKS

ONE) I am still not sure just how Richard defines the word “dynamic”. The only thing I have to go on is that he drew his conclusions from a cellular automata model, which I assume is his understanding of a dynamic system (?). Wolfram’s cellular system is effectively just another model of computation and therefore I guess that it would be possible to program cellular automata to calculate Pi; such a program is likely to be “simple” in as much as it is likely to have a short length, a short execution time and start with a simple initial state. Therefore I cannot make sense of what Richard means by suggesting that Pi is likely to be dynamically complex, but algorithmically simple.

TWO) I conclude therefore, that Pi can be “easily” generated within a dynamical system. However, Richard actually says this: “In other words, while the digits of Pi can be generated by a short program, it might not be produceable easily within a dynamical system, from a random initial state.” That last phrase, “from a random initial state”, is crucial as it rescues Richard’s statement: If the algorithm is the sort where the initial state and the corresponding output have a one to one mapping then most initial conditions will not lead to Pi in a realistic time. Therefore a random initial state is very unlikely to lead to Pi being calculated in a realistic time.




THREE) As I have said above, a “soft focus” version of Richard’s limitative theorem is valid: if one is selecting algorithms and initial states blindly then one is very unlikely to get the result one is looking for. In other words, if you’ve got monkeys programming your cellular automata it’s going to be a “garbage in/garbage out” situation. But if the programming of a dynamical system is in the hands of an intelligent agent then you might just get what you are looking for. Richard has effectively shown that if a dynamical system is to produce a configurationally complex output of a particular kind in realistic time then the dynamical algorithm has to be carefully selected. But it is easy to misinterpret this result: it is certainly not as strong as saying there are absolutely no simple algorithms which can generate particular complex outputs in short execution times; in fact, as we know, simple algorithms can quickly generate complex output in the sense defined by Richard, that is, in the sense of being “irregular” or “disordered”. So, basically, we are left with the question I keep coming back to: do simple “short time” dynamical algorithms exist which are fruitful in their generation of a subclass of complex forms and functions? As far as the question of evolution is concerned then, in a word, Richard’s work doesn’t eliminate Self Organization as a suspect in the inquiry. Richard’s conclusions are not so much wrong as easy to misinterpret. However, let me say that in my opinion the term “Self Organization” is a complete misnomer: there is no “Self” about it. If evolution works it only does so because the right algorithm has been selected by some transcendent super-context, whether we believe that context to be some impersonal mindless multiverse or an all-embracing, “self explaining” intelligence. (I subscribe to the latter view.)
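The claim that simple algorithms can quickly generate irregular output is not hypothetical: Wolfram’s elementary Rule 30 is a standard example. Its entire update rule is "new cell = left XOR (centre OR right)", about as short as a program gets, yet its output is famously disordered. A minimal sketch:

```python
# Wolfram's Rule 30: a one-dimensional cellular automaton whose update
# rule is trivially short yet whose output is highly irregular -- a
# concrete instance of a simple algorithm producing "disordered" output.

def rule30_step(cells):
    """One synchronous update of a ring of binary cells under Rule 30."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # single live cell in the middle
for _ in range(12):
    print("".join(".#"[c] for c in cells))
    cells = rule30_step(cells)
```

Running this prints the familiar chaotic triangle; the centre column of Rule 30 is irregular enough that it has been used as a pseudo-random generator.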

FOUR) However, having made that last statement we have to bear in mind that in the anti-evolution community evolution and ID are likely to be portrayed as mutually exclusive; that is, the anti-evolution community perceive this debate as an “evolution vs. intelligent design” dichotomy. Hence they are inclined to eliminate evolution from the enquiry by definition; that is, by defining in advance evolution to be a mindless, blind process (ironically, atheists are likely to agree with this characterization of evolution!). Thus any suggestion that evolution works by way of the clever selection of a dynamical algorithm is considered an oxymoron. We can see this “definition in advance” at work in Richard’s paper: he starts by presuming the agent selecting the program of the cellular automata to be blind and thus effectively lacking in intelligence. Not surprisingly, if Richard is going to employ chimps to select his algorithm he’s going to get out bananas.
"I only employ chimps, but to get a result I need a lot of them"

FIVE) Unless one subscribes to some kind of multiverse/infinite-trial system, even bog standard evolution, if it is to work, must be resourced by improbable preconditions. This improbability has the effect of triggering Dembski’s design detection criterion and thus, in the abstract, evolution has the imprimatur of Intelligent Design. However, the ID community represented by Uncommon Descent are de facto anti-evolutionist and therefore they will do their damndest to try and show evolution to be a mindless process that simply doesn’t work; anyone who so much as entertains evolution as a viable “self organizing” candidate is likely to be accused of courting “naturalism” and perhaps even accused of failing a crucial faith test. The irony is that they, along with the militant atheists, seem to have subliminally bought into the deistical intuition that “well oiled” mathematical mechanisms need no divine support, or perhaps not even a divine initiator. (See my blog post here.)

Friday, December 17, 2010

Beyond Our Ken: On Mature Creation. Part 1


"I’d love a bit of Apatosaurus but all we get is ham, ham, ham"

I have been looking into the concept of “mature creation” as propounded by the Young Earth aficionados on the Answers in Genesis web site. Reading some of their web pages it becomes clear they much prefer the terms “mature creation” or “functioning creation” to “appearance of age”. They don’t like their supporters talking about an “appearance of age” because, for reasons we are well aware of, AiG don’t believe in a cosmos of great age and therefore won’t accept that the universe even “looks” old. As Ken Ham says (see here):

“By saying the universe looks old, you are trusting that dating methods can give us an apparent old age for the universe—but they can’t.” Let me explain. When people say the universe has “apparent age,” usually they are assuming, for whatever reason, that the universe “looks old.” I have often found that, unconsciously, such people have already accepted that the fallible dating methods of scientists can give great ages for the earth. So if they believe what the Scripture says about a young universe, they have to explain away this apparent great age.

What Ken is saying here is that he completely distrusts any dating methods. That’s really not news, of course - if he did trust them he’d be out of his job as AiG supremo. Given that few, if any, dating methods return an age for the Earth anywhere near as short as 6000 years, it is no surprise that AiG policy is to do its damndest to undermine all dating methods. Rather than propose a reliable physical dating method themselves, AiG science is the science of negation. However, later on in the same article Ken goes on to show that in his mind the appearance of age is not just loaded with the concept of a chronological age:

When doctors look at the human body today, they can estimate age from various evidences in the body. But before sin, nothing aged—everything was created “very good.” The human body did not experience the effects of sin or aging.
What would a doctor from today’s fallen world say if he looked at Adam and Eve’s bodies just after they were created? This doctor would be very confused. Such perfect bodies would show no degenerative aging, and he would be shocked to learn that these adults were less than a day old.

Here, in addition to chronological age, Ken is loading the term “age” also with the idea of degeneration. Ken, of course, believes the process of cosmic degeneration originates from Adam’s sin. So, when he says “before sin, nothing aged” he means that “before sin, nothing degenerated”.

In introducing the concept of degeneration Ken is actually clouding the argument, because clearly evidence of an object having a history does not necessarily imply degeneration. For example, all human beings today have navels - a navel is not in itself evidence of temporal degeneration of the human body but simply evidence of its history, namely, that each individual was formed in the womb. So we must distinguish between degenerative age and chronological age. Regardless of whether they degenerate or not, objects have a history. Two questions then naturally arise: firstly, do objects with a history carry evidence of that history? Secondly, can we quantify that history in units of time? The answer to these questions may be “yes” or “no”: some objects carry more information about their history than others. Some objects carry so much information about their history that we may even be able to quantify their age. Others have little or no information and to all intents and purposes are a-historical.

Given that some objects display inherently historical features and others are a-historical, this fact determines the policy of AiG: objects which contain little or no historical information can be claimed by AiG to have been created as fully functioning objects, whereas objects that have blatantly historical indicators force AiG to endeavour to reconstruct a history (< 6000 years, of course) to explain those indicators.

As I have said before we must be grateful for small mercies. The AiG policy is at least an improvement on those forms of Young Earth Creationism that have no qualms about the ex nihilo creation of objects with a bogus appearance of history: Notorious examples are those YECs who claim that the light from the stars was created in mid flight and that fossils were created in situ. As a rule AiG shies away from this sort of approach because it clearly impugns the integrity of the created order and shows little respect for it. At least AiG haven’t become so uselessly spiritual that they have adopted a comprehensively anti-science stance and retreated into a fideist ghetto of spiritual ultras. But the trouble is, as we shall see, some objects don’t easily slide one way or the other into the historical and a-historical categories; like most real categories the boundaries are fuzzy.

...to be continued

Evolution: (Not) Wanted Dead or Alive

Further to my recent posts on luddites, mechanism, and evolution, here’s another post on Uncommon Descent indicating the anti-evolutionist’s timorousness toward any suggestion that “Natural” mechanism may be a source of form and function. Quoting the salient points:

Rene Descartes … urged evolutionary ideas because of the evident power of natural law. Yes god created humanity, but individuals are born and grow according to law. From people to plants, we observe incredible development brought about by nature. So too, the continental rationalist argued, we should understand the origin of the world as strictly naturalistic as well.

Decades later the influential Thomas Burnet showed how the Cartesian view is theologically mandated. Rather than creating a clock that doesn’t work and needs constant adjustment, the greater clockmaker makes a clock that works by itself. Likewise, the Anglican cleric argued, the greater god makes a world that operates on its own.

Cornelius Hunter, the author of the post, spells out the much dreaded deistical scenario:

And these machines assemble and operate according to natural law—there is no vitalism here, no divine finger adjusting the cogs and turning the crank.

This then is the threat posed by evolution in the minds of the anti-evolutionists and yet at the same time the great joy of the militant atheists: Evolution in the first instance appears to put God out of work, therefore the next logical step is to question whether He was ever there in the first place to be put out of work. Whether evolution works or not the anti-evolutionist doesn’t want it. Whether it works or not the militant atheist wants evolution. Hence for the anti-evolutionist evolution must go at all costs whereas for the atheist it must stay at all costs. It is doubtful in my mind whether such impassioned protagonists can handle this subject with sufficient detachment to arrive at a useful conclusion.

Wednesday, December 15, 2010

The Impasse

These two posts, one by PZ Myers and one by Paul Nelson, bring out the key issue that separates the evolutionists from the anti-evolutionists. The critical issue is one I have made a lot of on this blog; namely, irreducible complexity versus reducible complexity. This question is about how our world is made in the abstract realm of configuration space: given our particular physics, just how is the class of stable organic forms arranged in configuration space? Are these forms closely packed enough to allow random evolutionary jumps between them, or are they too widely separated, thus effectively blocking any possibility of evolution? It’s notable that both parties manage to address the same subject without appearing to be cognizant of this question. Configuration space is a huge, complex, abstract and perhaps mathematically intractable object; PZ Myers and Paul Nelson are like insects crawling around on the face of a mountain, unable to see the broad sweep of the landscape for what it is. Both parties need to be humbled by the immense platonic landscape in which they are embedded.

The anti-evolutionists can’t finally prove that biological structures are irreducibly complex or be cognizant of the non-linearities that may lie in configuration space which could give rise to sudden evolutionary spurts. And yet the evolutionists can’t present a full suite of evidence showing that biological structures are reducibly complex. This argument is going to run and run.


You don't have to be religious to use religious logic: Just reconstruct this cartoon using either "reducible complexity" or "irreducible complexity" instead of "baseball".

Friday, December 10, 2010

Beware Luddites at Work.

In this post I developed the idea that many anti-evolutionists share the atheist's view that the “evolution machine” puts God out of work. Here is the relevant passage from that post:

The underlying ideas driving this kind of thinking are very anthropomorphic; gone is the idea that God is so totalizing an entity that He is an environment; instead God is imagined to be in an environment - almost to an extent reminiscent of the Grecian view of gods, gods who have very human attributes and live in a very human environment. The picture is of a God who, much like a human artisan, one day creates a cosmic-sized mechanism that once running needs little sustenance, and which he can then walk out on and leave to manage itself.

Ironically the fundamentalists and anti-evolutionists share in this mindset; they have a sneaking suspicion that the atheists are right and that somehow mechanism, like the machines of the industrial revolution, is likely to put people out of work - even a cosmic designer. Anxious therefore to have a God who doesn’t put himself out of a job, they downplay the abilities of mechanism to generate form and variety. It is no surprise then that for fundamentalists and anti-evolutionists using mechanism to explain life is bad, bad, bad, whereas using Divine fiat is good, good, good.

There is a strong common gut feeling that the “Law and Disorder” mechanisms of modern physics betoken a regime that can function apart from the presence of God. The underlying anthropomorphism inherent in this form of deism is not only at the root of atheist thinking but also, ironically, not far away in anti-evolutionist thinking.

Now here is a passage taken from this post on Uncommon Descent that is the perfect illustration of what I mean:

Many Christians who say they believe in “Darwinism” do not understand what they are saying. They believe that God created through evolution and was involved in the process and guided it through to completion. They do not understand that “Darwinism” properly understood rejects the very view they hold. A Darwinist believes that the combination of natural law and random variation are sufficient to account for the origin and diversity of life without any guiding intelligence from God or anyone else. They believe that the human body is the result of a process that did not need God any more than a stone rolling down a hill needs God. Very often, therefore, the issue is not whether a Christian can believe Darwinism, but whether a Christian can hold a mistaken belief about Darwinism.

Darwinism, properly understood, is dangerous to all religious belief. It truly is, in Dennett’s phrase, a universal acid, and faith is one of the things that acid dissolves. It is for a very good reason that Dawkins famously proclaimed that Darwin made it possible to be an intellectually fulfilled atheist. And we see a strong correlation between the rise of Darwinism and the decline of religious faith, especially among the so-called intellectual elite. Belief can be very inconvenient when that belief places constraints on the sovereign will. Darwinism helps people throw off those constraints.

So the author of this post has imagined that it is possible to posit processes that don't need God. Even the Athenian poets understood the error of this thinking: "...for in him we live and move and have our being" (Acts 17:28). As I said in my previous post I wouldn't say that I’m 100% convinced by standard evolutionary theory (caveat: that may be down to my ignorance of the details of evolutionary theory), but I stand by it partly because of some of the crass philosophy (and theology) one finds amongst the anti-evolutionists. As Nietzsche said:

Sometimes we remain true to a cause simply because its opponents are unfailingly tasteless. (or stupid – ed)

Wednesday, December 08, 2010

Anti-Evolution vs. Intelligent Design

Let's be upfront about what we stand for

This short post on Uncommon Descent contains the first reference I have seen on UD to the words “anti-evolutionist”. The inference is that UD is being identified as an anti-evolutionist web site rather than just an Intelligent Design web site. As I have maintained on this blog before, in popular parlance “Intelligent Design” now evokes anti-evolutionist connotations: this practice is misleading given that Christian evolutionist John Polkinghorne claims to be an Intelligent Design creationist. If UD is primarily an anti-evolutionist site and only secondarily an Intelligent Design site, then to date UD has done little to make this clear. I haven’t yet seen many posts on UD acknowledging that if evolution is to work it must be resourced by highly improbable preconditions, thus flagging up Dembski’s ID detection criterion and making evolution itself an ID candidate. If UD really stood by ID first and foremost they would be able to make peace with evolutionists like John Polkinghorne; but no, the agenda of UD is now cast in stone and probably too mixed up with right wing politics and Young Earth Creationism to make this possible. I would love to be proved wrong on this score.

Anyway, if from now on UD starts identifying itself as an “anti-evolutionist” web site we will know where they stand. But somehow I don’t think this will happen, because they will want to continue to portray themselves as representative of the one and only authentic ID community and therefore have an interest in making out ID to be necessarily anti-evolution. They will therefore continue to misrepresent evolution as a necessarily informationless process that pretends to be able to create information.

Monday, December 06, 2010

God, Theology, Evidence and Observation

PZ Myers has an interesting blog post on Terry Pratchett, who is suffering from the first stages of Alzheimer’s disease. This is what PZ says:

The casual cruelty of nature is one example of the absence of a benevolent overseer in the universe. For another, I'd add the fact that Pratchett has been afflicted with a disease with no cure, of a kind that will slowly destroy his mind. We're left with only two alternatives: that if there is a god, he's insane or evil and rules the world with wanton whimsy; or the most likely answer, that there is no such being and it's simple chance that leads to these daily haphazard catastrophes.

PZ Myers’ comments remind me of Darwin’s loss of faith, a process that, I believe, was progressive and apparently linked to his observations of nature and above all to the tragedy of his daughter’s death. Now, let me say this: PZ has my full sympathies for what is not an unreasonable conclusion; even those of us of faith are challenged by the age-old problem of suffering and evil, especially when it is close to home. The number of times this challenge has led to a loss of faith in the faithful is beyond counting. So let him who is without sin cast the first stone at PZ.

But let’s get this clear: Firstly the above statement by PZ is overtly theological; it works from the presumed nature of God, His moral obligations and how He relates to the world. It is a counterfactual argument based on what PZ concludes should not exist if a benevolent God exists. Secondly, the argument being used is not entirely metaphysical and non-observational as clearly PZ is making a comparison between his implicit concept of God and his observations of the cosmos. Ergo, God is an entity for which there is observational evidence relevant to His existence or non-existence.

Is PZ’s alternative belief in what he refers to as “simple chance” a dynamic which provides a deep metaphysical explanation for the way things are? Seemingly not: Chance is a conceptual derivative of disordered patterns. Thus “simple chance” is little more than a name for a particular class of pattern. What I think PZ really means here is that he can’t believe a benevolent overseer would allow such patterns, patterns which are the source of “haphazard catastrophes”. Once again a theological argument is being invoked to draw conclusions about God’s existence or otherwise, based on observed patterns.

In conclusion PZ says “the most likely answer, [is] that there is no such being [as God]”. Does this mean that he has not entirely closed the door on God? In some ways it might be a good thing if PZ did close the door completely, because the only God he can think of must be "insane or evil and rules the world with wanton whimsy."





God suffers with man: Christianity hits the problem of innocent suffering head on

Wednesday, November 17, 2010

Luddites and The Evolution Machine

When Newton’s “clockwork” universe was “discovered” shortly before the 18th century, European culture had been familiar with mechanical clocks and automata of increasing sophistication for over 400 years. Once primed with their burden of potential energy and released, the ordered yet complex motions of clocks and automata continued without further human intervention - their makers could then walk away and let them run. These clever pieces of engineering provided the prototypes of a new paradigm and they have been used as an instructive model by philosophers ever since. Moreover, early anatomical studies helped the paradigm along. It was natural enough, then, for the interpreters of Newton to use the automaton as an analogy to draw conclusions about the relation of God to His creation. In particular, the notion of a Deistical God who built the universe in the manner of a human engineer and then left it to its own devices is a notion still very much with us today. The advent of Quantum randomness didn’t change the picture very much either: probability, like the tossing of a coin, is also subject to mathematical laws; moreover everyday observations suggest that the disorder of randomness is often a sign of the absence of intelligent management. Consequently mathematical randomness is inclined to be subsumed under the heading of mechanism. Nowadays those physical mechanisms are portrayed as being so good at creating variety and form that many doubt that a Divine creator and sustainer is needed at all; the successes of mechanism in generating patterns may prompt the idea that somehow mechanism can even “self create”; such a notion may in the final analysis be unintelligible, but it is probably at the back of some people’s minds. The upshot is that the concept of the cosmos as a grand, logically self-sufficient mechanism is now so embedded in our consciousness that many effectively say of God, “I do not need that hypothesis”.

The underlying ideas driving this kind of thinking are very anthropomorphic; gone is the idea that God is so totalizing an entity that He is an environment; instead God is imagined to be in an environment - almost to an extent reminiscent of the Grecian view of gods, gods who have very human attributes and live in a very human environment. The picture is of a God who, much like a human artisan, one day creates a cosmic-sized mechanism that once running needs little sustenance, and which he can then walk out on and leave to manage itself.

Ironically the fundamentalists and anti-evolutionists share in this mindset; they have a sneaking suspicion that the atheists are right and that somehow mechanism, like the machines of the industrial revolution, is likely to put people out of work - even a cosmic designer. Anxious therefore to have a God who doesn’t put himself out of a job, they downplay the abilities of mechanism to generate form and variety. It is no surprise then that for fundamentalists and anti-evolutionists using mechanism to explain life is bad, bad, bad, whereas using Divine fiat is good, good, good.

There is a strong common gut feeling that the “Law and Disorder” mechanisms of modern physics betoken a regime that can function apart from the presence of God. The underlying anthropomorphism inherent in this form of deism is not only at the root of atheist thinking but also, ironically, not far away in anti-evolutionist thinking. For example in this blog entry on Uncommon Descent we are presented with an apt metaphor for this sentiment: we hear of “Darwin’s unemployed God”, as if, as I have already said, God were a frustrated divine artisan in some Greek myth. It is this sort of anthropomorphic outlook which, I submit, explains why Richard Johns’ paper on self organization appeals to anti-evolutionists. (See my blog post here.) Johns’ thesis tries to show that a law and disorder package cannot effectively be the creator of complex variety and form. This (false) conclusion will find a very receptive audience amongst the anti-evolutionists, because so many of them have a subliminal deistical view that although law and disorder needs little or no divine management, it nevertheless has no right to compete with God as the creator of form and variety. They are therefore anxious to downplay the role of physical mechanism and re-employ God as the miraculous intervener by positing a world that requires large dollops of arbitrary divine fiat.

At the start of his paper Johns uses a polemical method that assists the anti-evolutionist aim of minimizing the role of mechanism. Johns hamstrings any chance that self organization might have been the cause of evolution by explicitly excising any intelligence that could be used to select a dynamical system favouring evolution. He effectively posits a know-nothing agent who is completely blind to an overview of the situation and thus has no chance of selecting the dynamics that gives self organization a chance. He is not only tying the boxer’s hands behind his back; he has blindfolded him as well. Ironically, Johns is kicking God out of His own creation.

In the anti-evilutionist and fundamentalist mind evolution is defined by the absence of contrivance, so it is no surprise that their version of evolution fails to work and that they regard themselves as the only purveyors of authentic intelligent design. They don’t believe that theistic evolutionists can be serious about intelligent design, and as I have already indicated, their subliminal deism will lead them to accuse theistic evolutionists of giving God his redundancy notice. That’s in spite of the fact that an evolutionist like Sir John Polkinghorne claims to be an intelligent design creationist. It’s little wonder then that people like John Polkinghorne are not sympathetic to the anti-evolutionists. Neither am I. I wouldn’t say that I’m 100% convinced by standard evolutionary theory (caveat: that may be down to my ignorance of the details of evolutionary theory) but I stand by it partly because of some of the crass philosophy one finds amongst the anti-evolutionists. As Nietzsche said:

Sometimes we remain true to a cause simply because its opponents are unfailingly tasteless. (or stupid – ed)

Sunday, November 14, 2010

YEC Starlight Travel Time: If at first you don’t succeed…


Judging from this article over on the Institute for Creation Research, Young Earth Creationist Russ Humphreys is still beavering away on his geocentric cosmology. The article is dated 1st Nov 2010 but it points back to an article here which in turn refers to some 2007 work by Humphreys where he attempts to perfect his model. I had a look at Humphreys’ ideas in my blog post here, but I haven’t seen this later work. Essentially Humphreys’ idea involves postulating a finite asymmetrical big-bang-like event with the Earth near the centre. His hope is that the resulting space-time metric generates enough time dilation in the vicinity of the Earth to slow down time to the extent that only about 6000 years passes in the Earth’s locale since creation.

One of the problems with Humphreys’ earlier work, the article claims, is that his models “did not provide enough time dilation for nearby stars and galaxies”. Humphreys’ later models, I gather, attempt to address this problem. This difficulty actually brings out just how geocentric Humphreys’ model must be: problems start, little by little, for the YECs at a mere 6000 light years from Earth. At all distances greater than that, light is hard pressed to reach us in time unless the “Humphreys effect” kicks into action in stages for astronomical objects seen beyond that distance. A distance of 6000 light years covers only a small fraction of our own galaxy, let alone the wider environs of the cosmos. Thus the Earth, according to Humphreys, must lie very precisely at the centre of his cosmic model. In fact I think that Humphreys is effectively claiming that the Pioneer gravitational anomaly may be evidence of the special status of Earth’s locale.

I suspect that Humphreys’ ideas will come to grief in time. If the “Humphreys effect” can be thought of as a concentric gravitational field with the Earth centrally placed, then the fact that the metric of that field must at some point in the past have had very steep differentials within a few thousand light years of Earth’s locale would, I guess, considerably distort the shape of our own galaxy. Off the top of my head let me just say that I’m not aware that astronomical observations support such a conclusion.

Anyway, Humphreys is at least showing some respect for science and the integrity of the created order by taking this approach; that is, he is trying to craft a creation narrative that doesn’t make recourse to large dollops of arbitrary Divine fiat. His type of approach is probably the best bet for the YECs - although many YECs may feel uneasy about the fact that according to Humphreys billions of years of history passes in most of the cosmos, implying that there are likely to be planets out there with an “old Earth” geology. As for the ideas of YEC creationist Jason Lisle, they are best binned and forgotten about.

Friday, November 12, 2010

Some Light Relief: Trench Humour on Remembrance Day.

The video below is absolutely priceless: I would normally publish this sort of thing on my Views News and Pews blog but I thought that Quantum Non-Linearity was in need of some lightening up:


Screams of laughter with Pastor Kerney Thomas

Perhaps I have a strange sense of humour but I was legless after watching this guy. He really does seem to be real and not just a spoofer. Who would have guessed that somebody out there would come up with this one and provide us with such an unexpected idiosyncrasy to gawp and wonder at! Think of the sheer tragedy of our world with its wars of mass destruction, numerous sufferings and evils .... and yet in the midst of life’s deeply serious trench warfare it suddenly throws up this sort of peculiar, asinine, frivolous inanity which contrasts so gloriously against the backdrop gravitas of life’s affairs. For a while it breaks the tension of contention and causes roars of laughter.

“There’s nowt so queer as folk” as the saying goes. Some people say that “You can’t make this stuff up!” Well, I once had a jolly good go at making it up:

http://viewsnewsandpews.blogspot.com/2007/05/fragrant-flatulence-blessing.html
http://viewsnewsandpews.blogspot.com/2006/10/moshing-mayhem-coming-to-your-church_13.html
http://viewsnewsandpews.blogspot.com/2006/12/signs-and-blunders.html
http://viewsnewsandpews.blogspot.com/2006/10/soul-searching.html
http://viewsnewsandpews.blogspot.com/2006/11/dont-try-this-at-home.html
http://viewsnewsandpews.blogspot.com/2006/11/plain-truth.html
http://viewsnewsandpews.blogspot.com/2007/10/death-by-sermon.html




Wednesday, November 10, 2010

Self Organisation


According to my reading of Richard Johns, this complex fractal landscape can't happen

As promised in my last two blogs here is my post giving a more detailed look at Richard Johns’ paper on self organization. The central part of Johns’ paper is his “Limitative Theorem”, in which he states:

A large, maximally irregular object cannot appear by self organization in any dynamical system whose laws are local and invariant.

Understood in its most literal sense the above statement by Johns is false. But, to be fair, it does rather depend on how one interprets the terms in Johns’ formulation. If these terms are interpreted using excluded-middle “true or false” logic, then Johns’ central thesis is beguiling. However, if they are interpreted in the fuzzy terms of probabilities, which deal with typical cases rather than special cases, then Johns is seen to have a point.

***

If you toss a coin many times you end up with a sequence of heads and tails that can be conveniently represented by a long binary pattern of 1s and 0s. This pattern of 1s and 0s can be broken down into a contiguous series of segments of any chosen length of, say, n tosses. The probability of finding a segment containing k 1s is then given by Bernoulli’s formula:

P(k) = C(n, k) p^k (1 - p)^(n - k)

where p is the probability of heads being thrown and where:

C(n, k) = n! / (k! (n - k)!)

A corollary of this formula is that segments with equal numbers of 1s have equal probability. For example with n = 10 a segmental configuration such as 1010101010 is just as probable as, say, 1111100000 or 1000101011.
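To make the corollary concrete, here is a small Python sketch (my own illustration, not anything from Johns’ paper) that computes the Bernoulli probability and confirms that two configurations with the same number of 1s are equally probable:

```python
from math import comb

def bernoulli_prob(n, k, p=0.5):
    """Probability that a segment of n tosses contains exactly k ones."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def config_prob(bits, p=0.5):
    """Probability of one specific segmental configuration of 1s and 0s."""
    k = bits.count("1")
    return p**k * (1 - p)**(len(bits) - k)

# Configurations with equal numbers of 1s are equally probable:
assert config_prob("1010101010") == config_prob("1111100000")

# Summing the formula over k = 0..n accounts for every segment, so the total is 1:
total = sum(bernoulli_prob(10, k) for k in range(11))
```

The individual configuration probability depends only on the count of 1s, not their arrangement, which is exactly the corollary in the text.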

Bernoulli’s formula is most likely to be thought of as a device for statistical prediction, inasmuch as it can be used to calculate the expectation frequencies of segmental configurations like 1000101011 or 1010101010. The formula works because the number of possible sequences that can be constructed from 1s and 0s with segmental frequencies consistent with the formula is overwhelmingly large compared to the number of sequences that deviate from the Bernoulli expectation frequencies. Thus, given that a sequence is generated by a chance source, a member of the class of sequences conforming to the Bernoulli expectation values is going to be a highly probable outcome.

But statistical prognostication is not the only way that Bernoulli’s formula can be used. It is also possible to use the formula as an indication of the configurational complexity of a sequence. To do this we take a given sequence of 1s and 0s and enumerate the frequencies of its segmental configurations. If the sequence so enumerated returns frequencies consistent with the Bernoulli values then that sequence is deemed to be maximally complex. If, given a set of enumerated segmental frequencies, the number of possible sequences consistent with that enumeration is represented by Z, then Z is maximised for a Bernoulli sequence. If, on the other hand, a sequence deviates from the Bernoulli expectation values, then correspondingly Z will deviate from its maximum value. The departure of Z from its maximum is a measure of the departure of the sequence from maximum complexity. I have dealt with these matters more rigorously elsewhere, but for the purposes of this article we can now get a handle on Richard Johns’ paper. Johns’ paper makes use of something he calls “irregularity”, a concept closely related, if not identical, to the concept of configurational complexity that I have defined above: Johns breaks his patterns up into small chunks and then defines irregularity in terms of the number of possible configurations consistent with the frequency distribution of the segmental configurations.
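As a toy illustration of this counting idea (my own sketch, not Johns’ actual formalism), the following Python function chops a binary sequence into segments, tallies the segmental frequencies, and returns the multinomial count Z of sequences sharing that frequency profile:

```python
from collections import Counter
from math import factorial

def disorder_Z(bits, seg_len=4):
    """Multinomial count of sequences sharing this segmental frequency profile.

    The sequence is chopped into contiguous seg_len-bit segments; Z is the
    number of distinct orderings of those segments, i.e. how many sequences
    are consistent with the enumerated frequencies. Highly ordered sequences
    give a low Z; Bernoulli-like sequences give a high Z.
    """
    usable = len(bits) - len(bits) % seg_len
    segs = [bits[i:i + seg_len] for i in range(0, usable, seg_len)]
    z = factorial(len(segs))
    for f in Counter(segs).values():
        z //= factorial(f)
    return z

# A maximally regular sequence: every segment identical, so Z = 1.
low = disorder_Z("0101" * 8)    # → 1
# An irregular sequence of the same length shares its profile with many others.
high = disorder_Z("11010010001110100101100011110010")
```

This is only a crude stand-in for the full combinatorics, but it captures the point that Z rockets upward as the segmental frequencies spread out toward the Bernoulli values.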

Johns’ paper is an exploration of the relationship between configurational complexity (or “irregularity” as he calls it) and computational complexity. Computational complexity, as Johns understands it, is measured in terms of the execution time needed to reach a pattern; in particular he is concerned with the time needed to generate configurationally complex patterns. Johns arrives at the conclusion - as stated by his Limitative Theorem above - that maximally irregular patterns cannot appear as a result of self organization. What he actually means by “cannot appear” is that maximally irregular patterns cannot be produced in what he calls a “reasonably short time”. Thus, according to Johns, complex irregular patterns are computationally complex – that is, they need large amounts of execution time.

A few moments’ reflection reveals that there is a correlation between computational complexity and configurational complexity, as Johns submits. This correlation arises simply because of the sheer size of the class of configurationally complex patterns. The value of Z rises very steeply as the segmental frequencies move toward the Bernoulli values. In fact the value of Z far outstrips the number of steps that can be made in a realistic computation. Thus, if we imagine a computation taking place, it is clear that given realistic execution times any computation, whether controlled by a deterministic algorithm or some aleatory process, is only going to be able to visit a relatively small number of the huge number Z of configurations. It is therefore fairly easy to see that the chances of a computation generating an arbitrarily selected complex configuration in a realistic time are very small. This, then, is essentially Johns’ Limitative Theorem: an arbitrarily selected irregular or complex configuration is very probably going to have a high computational complexity.

But we must note here that “very probable” doesn’t rule out improbable cases where self organization appears to take place in a realistic time. In fact at the start of his paper Johns declares that cases where self organization depends on starting from “fine tuned initial conditions” break the rules of his game and therefore don’t classify as self-organisation. The big problem with Johns’ formulation is the meaning of his terms; in some people’s books an example of a complex pattern that makes a surprisingly quick appearance from particular initial conditions would still classify as self organization.

Johns’ concept of irregularity, as we have seen, is closely related to the kind of patterns generated by chance processes such as the tossing of a coin. Johns’ “irregular” patterns and those patterns produced by aleatory processes both have high values of Z. That is why I actually prefer to call them “disordered” patterns and refer to the quantity Z as “disorder”. Given this equivalence between Johns’ concept of irregularity and my concept of “disorder”, it is easy to think of examples where an irregular object is generated in a reasonably short time, apparently violating Johns’ Limitative Theorem: I am thinking of cases where elementary algorithms generate pseudo-random sequences in polynomial execution times; such rapidly generated sequences show a high value of Z. But given that there is a certain amount of doubt about the way the terms of Johns’ Limitative Theorem should be interpreted, it is difficult to know whether this example can be considered to violate his thesis. Are a blank memory and a simple algorithm to be considered “fine tuned initial conditions”? In addition we know that when Johns says in his Limitative Theorem that self organization “cannot appear” he is not speaking in absolute terms but actually means that self-organization cannot appear in reasonable execution times. So perhaps we should rewrite Johns’ Limitative Theorem as follows:
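A concrete instance of the kind of counterexample I have in mind: a linear congruential generator is about as elementary as algorithms get, and it emits each bit in constant time, yet by segmental counts its output passes for a disordered pattern. (A hedged Python sketch; the constants are the familiar glibc-style LCG parameters, chosen purely for illustration.)

```python
def lcg_bits(n, seed=1, a=1103515245, c=12345, m=2**31):
    """Generate n pseudo-random bits from a simple linear congruential generator.

    Each bit costs O(1) work, so the whole sequence appears in linear time,
    yet the output looks like a maximally irregular ("disordered") pattern.
    """
    bits = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        bits.append(str(x >> 30))  # top bit of the 31-bit state
    return "".join(bits)

seq = lcg_bits(10000)
ones_fraction = seq.count("1") / len(seq)  # close to 0.5, as Bernoulli statistics expect
```

Whether this counts against Johns depends, as the text says, on whether a blank memory plus a few lines of arithmetic amount to “fine tuned initial conditions”.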

A large, arbitrarily selected, maximally irregular object is unlikely to appear in reasonable time by self organization in any dynamical system whose laws are local and invariant.


This more measured form of Johns’ principle allows for the rare cases where self organization happens. For example, we know that some elementary algorithms can reach a limited subset of irregular patterns in a relatively short time, even though it is mathematically impossible for such algorithms to reach all irregular patterns in reasonable execution times. It is precisely this kind of exception to the rule that leaves the whole issue of self organisation an open question. Therefore as far as I’m concerned Johns’ paper has not eliminated evolutionary self organization from the inquiry. In fact it’s the same old contention that I have raised time and again on this blog: complexity is not in general a conserved quantity; complexity can be generated from simple symmetrical starting conditions in polynomial times. I have never seen satisfactory cognizance taken of this fact in the anti-evolution community. In fact, as I have suggested before, it is ironic that their very positing of a fabulous creative intelligence subtly subverts their anti-evolutionism; for if evolution is an example of self-organization arising from the right regime of generating laws then we are dealing with a very rare case – a case that the level of creative intelligence envisaged by the anti-evolutionists is presumably more than capable of contriving. With breathtaking irony, the anti-evilutionists’ attempts to show the impossibility/improbability of evolution fail to connect the evolutionary conundrum with a concept that looms large in their culture and which stands impassively behind them: namely, Divine Intelligence!

Some Notes on Complexity.
Although Johns rightly points out that biological structures such as DNA score high on the irregularity scale, irregularity (or “disorder” as I prefer to call it) does not capture an important characteristic of biological complexity and therefore fails to bring out the real reason for the relative computational complexity of these structures. Let me explain:

In one sense the class of disordered sequences is actually computationally elementary. For example, imagine the problem being posed of generating any irregular/disordered pattern. The simplest way of solving this problem is to toss a coin: using this method of “computation” we can arrive at an irregular/disordered pattern in as short a time as it takes to make the required number of tosses. So let me suggest that a better measure of complexity than the bare value of Z is given by this equation:

Complexity, C ~ Log (Z/W),

where Z is the disorder of the case sought for and W is the size of the specified class whose members all classify as a “hit”. For example, if we specify that we require any maximally disordered pattern then W ~ Z and so C ~ 0, which expresses the fact that this particular problem is of low computational complexity, because simply flipping a coin is likely to produce what we want. But if, on the other hand, we specify that the pattern generated is to be a self-perpetuating machine, it is very likely that Z will be large and W small, thus returning Log(Z/W) ~ large. Hence the point I am trying to make is that what makes living things computationally complex is not so much their configurational complexity as measured by Z, but rather a combination of configurational complexity and their rarity.
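The two cases just described can be put into numbers with a short sketch (my own illustrative figures; the values of n and W are assumptions chosen for the example, not measurements of anything biological):

```python
from math import comb, log2

def complexity(Z, W):
    """C ~ log(Z / W): disorder of the sought-for case over the size of the hit class."""
    return log2(Z / W)

n = 100                 # pattern length in bits (illustrative)
Z = comb(n, n // 2)     # order of magnitude of the maximally disordered class

# Specification: "any maximally disordered pattern" means W ~ Z, so C ~ 0.
c_easy = complexity(Z, Z)   # → 0.0

# Specification: a rare target class, say W = 1000 qualifying patterns
# (a purely hypothetical figure), and C becomes large.
c_hard = complexity(Z, 1000)
```

The coin-tossing problem scores near zero while the rare-target problem scores in the tens of bits, which is the contrast between bare disorder and disorder-plus-rarity that the text is driving at.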

But even this measure of complexity may only be apposite for typical conditions. If there is such a thing as a class of self-perpetuating complex patterns that can be arrived at in polynomial time from some basic physical algorithms, then it is likely that the number of basic physical algorithms is actually smaller than the configurational complexity of the patterns they produce; in which case the computational complexity of these patterns would be smaller than their configurational complexity! In fact, although we don't actually get a "free lunch" here, we at least get a fairly cheap lunch!

Whether or not actually existing biological structures are a rare case of evolutionary self organization, I nevertheless suspect that there are incredibly complex self perpetuating structures out there in platonic space that cannot be reached by evolutionary self organization. Just imagine what they must be like!

Sunday, October 24, 2010

Richard Johns vs. Larry Moran

I spotted this piece of street art whilst walking through Norwich. I photographed it because in a crude way it illustrates the nature of physical explanation as I understand it, particularly in physics. We’ve heard about the “turtles all the way down” scenario: here an infinite regress results as the demand to explain the explanation leads to a stacked succession of objects that fail to converge on any final explanation that could be said to have some semblance of being a natural end point. In contrast my photograph is a way of conveying, using another rather woolly zoological metaphor, that explanation in physics does involve a progression of objects that shows some kind of convergence: as is often remarked, physics succeeds in showing how the complex can be generated from the relatively simple, and as physics advances the complexity of the cosmos is reduced to elementary explanatory objects. In my street art metaphor this reduction in complexity is expressed by the diminishing size of the supporting mammals, but as this is a rather loose metaphor a type violation is allowed and the final explanatory object is represented by a ball – something mathematically simple, elemental and above all irreducible. As I have affirmed many times on this blog, physics, in the final analysis, is a science of description that gets its explanatory purchase on nature simply because nature is presumably highly organized. This means that if the cosmos is a closed system and is appropriately organized it will be amenable to a succession of increasingly compressed descriptions. It is the destiny of physics, therefore, to eventually come up against a logical barrier beyond which it cannot proceed; for there comes a point when the explanatory “compression” can go no further; any further attempt to explain the explanation simply leads to the “turtles all the way down” effect.
Thus, if physics is to lead us to a final theory it is destined to ultimately leave us at a barrier of incompressible brute fact. Having then arrived at this kernel of fact its job is finished. (If indeed such is possible – it may not be possible as nature could conceivably be mathematically open ended)
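As an aside, the compression metaphor above can be made quite literal. The following is my own quick sketch (nothing to do with physics proper, just Python’s standard zlib compressor): a highly organized string admits a much shorter description than a patternless one of the same length, which is the everyday analogue of what I mean by nature being “amenable to a succession of increasingly compressed descriptions” – and of why a genuinely incompressible kernel of brute fact would mark the end of the descriptive road.

```python
import os
import zlib

# A highly organized string: full of regularity, so a general-purpose
# compressor finds a far shorter description of it.
organised = b"ABAB" * 2500          # 10,000 bytes of repeating pattern

# An (effectively) incompressible string: no exploitable regularity,
# so no description much shorter than the thing itself exists.
random_ish = os.urandom(10000)

short = len(zlib.compress(organised, 9))
long_ = len(zlib.compress(random_ish, 9))

print(f"organised: 10000 -> {short} bytes")
print(f"random:    10000 -> {long_} bytes")
```

The organized string shrinks to a few tens of bytes; the random one barely shrinks at all (in fact zlib’s overhead usually makes it slightly longer). The random string is the toy analogue of the “barrier of incompressible brute fact”.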

The news conveyed by my metaphor is worth putting beside the views of two people: Firstly Richard Johns, who is probably an intelligent design creationist and whose paper on the limits of Self Organisation was mentioned in my last blog entry. My second client is the uncompromising evangelical atheist Larry Moran.

Richard Johns: In his paper on self organization Johns arrives at a conclusion that for many ID theorists is the anti-evolution community’s shibboleth of authentic ID; namely, that there is some kind of conservation law of complexity and/or information that means you can’t have “a free lunch of complexity”. According to this view the presence of complex organisms must have its origins in more or less equally complex precursors. Using the zoological metaphor the anti-evolutionist’s view is that one type of elephant can only be supported on another type of elephant; so, either you end up with “elephants all the way down” or you believe that at some point God created the first elephant. In other words anti-evolutionists are unlikely to favour the view expressed by my Norwich street art.

I have read Richard Johns’ paper on the limits of self organization and although I haven’t properly completed my analysis of it let me say in advance that I am not entirely happy that the conclusions of this paper are robust. I will, in due course, be giving a detailed analysis of the paper, but suffice to say here that Johns’ reasoning does not take into account that a relatively small subset of complex forms can be generated by simple algorithms. Johns appears to have been misled by the fact that the overwhelming majority of complex forms can only be reached algorithmically if either the algorithm is executed for a prohibitive amount of time or has available complex initial conditions on which to work. Johns’ reasoning certainly applies to the large majority of configurations and therefore it looks as though Johns has been too easily satisfied by the fact that in terms of probabilities his argument works: Because probabilities favour the situation where complexity arises only from complex initial conditions or after impractically long execution times, then probabilistically speaking Johns is right. But he then fails to take into account the rare cases where a passage from simplicity to complexity is possible. It is, presumably, well within the capability of a Divine intelligent designer to contrive one of these rare cases of simplicity to complexity in realistic execution times. Thus, given that Johns is a suspected ID creationist it is ironic that it is the very concept of a Divine designer that in the final analysis raises doubts over this line of argument as a vehicle for anti-Darwinian sentiments.
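The standard textbook illustration of that “relatively small subset of complex forms generated by simple algorithms” is Wolfram’s Rule 30 cellular automaton – an example of my own choosing, not one taken from Johns’ paper. The update rule fits in a single line, the initial condition is a lone cell, and yet the output is complex enough that its centre column has been used as a pseudo-random source:

```python
# Rule 30: a one-dimensional cellular automaton. The rule is trivially
# simple, the seed is a single cell, yet the pattern produced is complex.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells (fixed zero boundary)."""
    padded = [0] + cells + [0]
    # Rule 30 reduces to: new cell = left XOR (centre OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run(width=31, steps=15):
    """Evolve a single seed cell for the given number of steps."""
    row = [0] * width
    row[width // 2] = 1          # maximally simple initial condition
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Run it and an intricate, aperiodic triangle of cells unfolds from a one-bit seed in a handful of steps – simplicity to complexity in realistic execution time, exactly the kind of rare case I suspect Johns’ probabilistic argument waves past.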

Larry Moran: In this post Larry responds to some questions posted on the anti-Darwin web site “Evolution News”. The first three questions impinge on the subject matter of this post and so I have reproduced these questions and Larry’s answers below. That in the final analysis physical explanation is destined to come up against a fundamental logical barrier is perhaps indicated by Larry’s responses: He looks a little bit like a baffled man who is shifting his weight from one foot to the other.

1) Why is there anything?

I don't know and I don't really care. I'm quite happy to think that something has always existed but I'm not troubled by the fact that our space-time may just be an accident.

Well, I do care and I am more than a little curious about ultimate origins and do my best to think past the barrier of physics’ incompressible kernel of explanatory information. If this requires some nifty and reflexive philosophical footwork then so be it. Besides, a philosophical frame of mind probes the meaningfulness or otherwise of linguistic forms like “space-time may be just an accident”; it’s a frame of mind that doesn’t accept an appeal to randomness as a pretext to shelve the problem. For something to be an accident we have to embed it in some higher context in which the event can be judged as accidental; for example, throwing a six on a die is an “accident” within the higher context of the physical circumstances surrounding the throwing of the die. It follows, then, that “accidents” require some sort of physical context, and this of course raises questions about the origin of this context. The spectre of “Turtles all the way down” is haunting us once again.

So just what explanatory status does Larry’s “accident” have other than as a pretext for dismissing the problem with a bit of hand waving? Related to this particular form of hand waving is the common misconception that if space-time is some kind of inflated quantum vacuum fluctuation then the problem of something for nothing is solved: Somehow amidst this kind of reasoning questions about the problematical nature and origin of physical laws that can “create ex nihilo” in this way just get quietly dropped. In any case it is not at all clear to me that physical laws and the material substrate which they describe can be meaningfully separated. The notion that the laws of physics play the role once played by the “Word of God” by bringing things into existence suggests that Christian theology is deeply rooted in our culture and is subliminally present in the thinking of Westerners; even atheists.

2) What caused the Universe?

I don't know. In fact, I'm not even sure what you mean by "cause." I'm told by experts in the field of cosmology that there's no need to invoke a supernatural being to explain the origin of the universe but if you want to believe in a deist god then that's all right by me.

I agree with Larry’s view that the meaning of the word “cause” is nebulous: Understood in the everyday “domino effect” sense of one thing interfering with another in a sequence of events, the word is not very helpful when viewed in the light of physics, whose most general explanatory structures are mathematical constraints rather than rules of how effects are transferred from one object to another. Larry’s passing on the question of “ultimate causes” to cosmology experts is OK, but we happen to know in advance about the logical limits of what cosmologists can ultimately achieve – namely, an elemental kernel of brute fact, the terminus of descriptive science. If physics ever reaches this point then we can say that physics has a complete theory, but only in a descriptive sense; for in a logical sense physics cannot avoid a final incompleteness.

What Larry seems not to have conceived is that any truly supernatural being would be over and above the cosmos and thus would not appear as an auxiliary player adding his 100 cents worth of interventional cause and effect every now and then; rather such a being would be a present tense continuous agent in the creation and sustenance of cosmic form. So if physics eventually provides a complete description of the cosmos, then I don’t expect such a being to appear in the final theory: Physics, after all, is about succinct descriptions and not about the deep philosophy of origins. If a description of the universe included what Larry refers to as “supernatural” beings, I would conclude that they are players very much embedded in the cosmos and would therefore be very “natural” beings like little green men or something; entities that in the past would have been classified as spirits in an animistic world view. The intellectual grasp of the concept of God has much more to do with the reflexive philosophical frame of mind that must be adopted when trying to think beyond physical description to those philosophically diffuse meta-questions about ultimate origins.

3) Why is there regularity (Law) in nature?

I don't know. That's not my field.

Nicely side stepped; and just as well because in the context of physical descriptive logic such questions are unintelligible: How do we formulate a law which explains regularity in nature when that law itself must exploit a presumed regularity to be an effective explanatory object? Self referencing questions like this are not going to be intelligible let alone answered unless we are prepared to get philosophically reflexive.

Summarizing. We have, then, two very different perspectives here. Johns is saying that the kernel of fact needed to describe the universe (which must include the phenomenon of life) cannot have a complexity that is reducible to anything smaller than an “elephant sized” object. He is therefore anxious to talk up the notion of the conservation of complexity because surely big elephants are harder to explain than simple objects like mathematically elementary spheres. Larry Moran, on the other hand, probably believes in the descriptive reducibility of life in terms of physics; a point of view which I tend to support (But I have to admit it: As a great fan of physics and computer programming I could be biased. For this reason I’m prepared to entertain the view that life may be a second creative dispensation; that’s why I take seriously the work of people like Dembski and Johns). But the trouble with Larry is that he is either unaware of, or waves past, the deeper questions about the ultimate origins of even simple objects (as can be seen from his responses above), questions that demand some philosophical reflexiveness.

In some areas the anti-theists and the anti-evolutionists appear to have common philosophical assumptions: I suspect that both parties share the view that as explanatory objects get more elemental, perhaps to the point of seeming to be trivial or aleatory, then a stage is reached where it is felt that no deeper reflexive explanation is needed. Thus the anti-theists feel the need to minimize the existential question by maximizing the triviality of the fundamental explanatory objects. People like Larry Moran can then wave those objects through the passport control of the critical faculties without making any probing enquiries. So, given this sort of behaviour amongst anti-theists it’s no surprise that the anti-Darwinians want to keep things as irreducibly complex as they can; you might be able to smuggle a ball through passport control, but an elephant is a different matter. In my view, however, the degree of complexity of a contingent object is irrelevant to the deeper questions of its existence; the existence of a simple sphere is just as hard to explain at the philosophical level as an elephant.




Saturday, October 16, 2010

On Origins

I am currently studying a paper publicized in this post on Uncommon Descent and hope eventually to give my verdict on this blog. The paper is written by Richard Johns and concerns the limits of “Self Organisation”. I notice that Johns eliminates evolution as a separate category by classifying it as a “self organizing” process. That’s a point in his favour: It underlines the fact that if evolution is to work it requires the right mathematical conditions – it’s not organization for free as some seem to think. In any case I dislike the term “self organization”; it sounds too much like “self-creation” which is probably an equally if not even more dubious concept. Johns’ paper certainly looks very interesting.

I have also had a request to provide an opinion on another YEC cosmology that attempts to solve the star light travel time problem. This request has come via a correspondent called age2age who made the request when he commented on my blog post here. I won’t be releasing details on this matter, however, unless age2age gives permission. There's nothing like a bit of secrecy to spice things up. Let's face it, intrigue sells. (But I bet it won't be me that's making the money)

Furrowed brow: I've got my work cut out at the moment!