Friday, June 25, 2010

Back of the Envelope Mathematical Model Number 2: Bayes' Theorem.


Bayes: a Man of the Cloth.

Some years ago I came across an argument for the existence of God that used Bayes' Theorem. In fact the theorem seems to have been associated with arguments for the existence of God from day one; for example see the end of the wiki page on Bayes' Theorem. Before I can consider this argument we first need to get a handle on the theorem.

A straightforward derivation of Bayes' Theorem can be provided using a frequency understanding of probability. To this end we imagine we have a total of T items. Let these T items have properties A and B distributed over them. Let the number of items with property A be F(A) and the number of items with property B be F(B).

Now, there may be some items that possess both properties A and B. If we take the set of items that possess property B, then let the number of items in this set possessing property A be represented by F(A|B). If we now take the set of items that possess property A, then let the number of items in this set possessing property B be represented by F(B|A).

These quantities can be pictured using the following Venn diagram:

One fairly obvious deduction is that:

F(A|B) = F(B|A)

since both quantities count the same set of items, namely those possessing properties A and B together.

Thus, given these quantities we now imagine that we throw the T items into a bag, agitate them, and then select an item. If we assume a priori equal probabilities, then using the Laplacian definition of probability we have:

P(A) = F(A)/T,  P(B) = F(B)/T,  P(A|B) = F(A|B)/F(B),  P(B|A) = F(B|A)/F(A)

Since F(A|B) = F(B|A), it follows that:

P(A|B) F(B) = P(B|A) F(A)

Dividing both sides by T then gives:

P(A|B) P(B) = P(B|A) P(A),  or equivalently  P(A|B) = P(B|A) P(A) / P(B)

The latter expression is Bayes' Theorem. Notice the symmetry of the proof and, moreover, the symmetry of Bayes' equation; as one would expect, given that A and B are interchangeable. I have arranged the derivation in order to bring out this symmetry.
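To see the frequency derivation in action, here is a little Python sketch. The counts (T = 100 items, 40 with property A, 25 with property B, 10 with both) are made-up numbers of my own, purely for illustration:

```python
from fractions import Fraction

# Hypothetical counts: T items, F(A), F(B), and the overlap F(A|B) = F(B|A)
T, FA, FB, F_both = 100, 40, 25, 10

# Laplacian probabilities built from the frequencies
PA = Fraction(FA, T)
PB = Fraction(FB, T)
P_A_given_B = Fraction(F_both, FB)   # F(A|B) / F(B)
P_B_given_A = Fraction(F_both, FA)   # F(B|A) / F(A)

# Bayes' Theorem in its symmetric form: P(A|B) P(B) = P(B|A) P(A)
assert P_A_given_B * PB == P_B_given_A * PA

# And in the rearranged form: P(A|B) = P(B|A) P(A) / P(B)
assert P_A_given_B == P_B_given_A * PA / PB
print(P_A_given_B)  # 2/5
```

Using exact fractions rather than floats means the two sides of the equation match identically, which makes the symmetry of the result easy to verify.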

Friday, June 18, 2010

Polarisation, Polarisation, Polarisation!

Some of us are nonplussed by an escalation in attempts to construct watertight arguments

I have only just seen this post on Uncommon Descent, a post which came out in April. I am very gratified by the generally conciliatory tone of the post. The post was a response to Karl Giberson of Biologos, who the post claims has “offered an Olive Branch to ID”. Biologos is the Christian think tank founded by Francis Collins, an affiliation that identifies itself with Theistic Evolution. In turn Thomas Cudworth, who wrote the post for Uncommon Descent, makes some very positive overtures. Moreover, he also makes some significant statements:

William Dembski, for example, has said that ID does not in itself require gaps in natural causality, and therefore does not rule out a wholly natural process of evolutionary change, provided that the natural process is understood not as chance-driven but as designed. (I really wish Dembski would make his position here clearer - TVR)

ID is compatible with the view that new information was “input” into the biological realm at various points, e.g., at the origin of life, during the Cambrian explosion, at the origin of man, etc. But it does not require such a view.

All that ID insists upon is that the integrated complexity observed in biological systems is in important respects the product of design. The means by which the design was brought into nature – via a front-loaded scheme that operates naturalistically, via subtle indetectable steering, or via “spot miracles” to fill in the “gaps” – are not part of design theory per se.

It is precisely because ID focuses on design detection that it is compatible with many different views of how biological order emerged. Thus, ID is a “big tent”, which can embrace people who don’t accept macroevolution at all, and also those who accept chemical and biological evolution “from molecules to man”. What unites all ID proponents is not a particular account of origins, but a rejection of non-design in favor of design.

TE as such says nothing either way about the detectability of design, and ID as such says nothing either way about the occurrence or non-occurrence of evolution. Therefore, neither group needs, on definitional grounds, to deny the core belief of the other.
It is only those ID people who insist on rejecting evolution on principle, and only those TE people who insist that God’s design must not be detectable, that have no common ground.

In sum, I second Dr. Giberson’s proposal for a new exploration of what ID and TE have in common, rather than what separates them. I think the best TEs and the best ID people have a strong sense of design in nature that is not merely a subjective impression but which points beyond nature to a designer of nature.

Sounds good to me; it seems we have here a basis for reconciliation between Uncommon Descent and Biologos. So why can’t they get on? The answer, according to Cudworth: Polarisation, Polarisation, Polarisation!:

But in between, there is an overlap zone, which I think that neither TEs nor ID people have fully explored, because of reckless past charges on both sides which have generated great mutual distrust.

And there is no better example of polarization and mutual distrust to be found than in this recent post on UD where it’s back to the belligerent business of evolutionists slugging it out with anti-evolutionists. Cudworth’s post, it seems, is already water under the bridge, soon forgotten.

Part of the deeper problem here may be the hold the philosophy of deism has on our culture. My theory is that both evolutionists and anti-evolutionists have to consciously resist a compulsion toward a default deism. This philosophy suggests that if the patterns displayed by the cosmos are mathematically describable then it follows that they are autonomous “mechanisms” in need of no maintenance and sustenance. Ironically even the most out-and-out anti-evolutionist seems to hold this view: in consequence (s)he has a pressing spiritual need to propose divine “interventions” in order to prove the presence of God in what (s)he otherwise perceives to be a self-sufficient machine world of mathematical pattern.

STOP PRESS 20/6/10
In a post headed “Theistic atheism –oops I meant theistic evolution” Denise O’Leary quotes a fellow anti-evolutionist journalist:

To put it in terms of an equation, when atheists assure us that matter + evolution + 0 = all living things, and then theistic evolutionists answer, no, that matter + evolution + God = all living things, it will not take long for unbelievers to conclude that, therefore, God = 0.”

She goes on to say:

Right, exactly, that is the project of “theistic” evolution, so far as I can see. Helping theists get used to a world run by atheists and their values, while still hollering fer Jesus irrelevantly somewhere.

All good polarizing stuff that’s bound to further alienate the theistic evolutionists, thus illustrating my point. Moreover, we have a fine expression of the essentially deistic paradigm that some people are working to; namely the “equation”: matter + evolution + God = all living things. Note that this portrayal of the theistic evolutionist’s position conceives God’s work to be something different from “natural forces” represented by “matter + evolution”. The Christian anti-evolutionist, who may still have a subliminal belief that “natural forces” have at least a nominal autonomy, reads “evolution + matter” as an upstart usurper of God’s sovereignty. (S)he thus wishes to minimize the “natural” terms in the equation by putting evolution + matter = nothing very much. But this is qualitatively little different from atheists setting God = 0. Deists and interventionists hold a common philosophy that differs in degree but not in quality.

As I once said on this blog: It is ironic that those who are so vocal about believing in "interventions" support a philosophy that has a close relation with deism: "N interventions" very easily turns into "Zero interventions" when faith falls away and N slides toward zero. If we have to use these mathematical metaphors a better one to my mind than these simple linear equations is to think of God as a “zero point” operator i.e. G(0) = natural history.



Business as usual on the polarisation front

Wednesday, June 16, 2010

Problems in Young Earth Creationism: Star Light Travel Time

Young Earth Creationists have an astronomical problem on their hands; to say the least!

Young Earth Creationism insists that the Earth is no more than 10,000 years old; in fact the version of Creationism represented by the web site Answers in Genesis suggests an even shorter time span of around 6000 years. This view, needless to say, immediately creates an astronomical issue: Given that the stars and galaxies have distances well in excess of 10,000 light years Answers in Genesis need to explain how the light from distant stars could have reached the Earth in less than 10,000 years.

Answers in Genesis, to their credit, do at least attempt to tackle such problems. In contrast I think you will find that there are some hyper-fundamentalist Christian groups out there who take it for granted that their own interpretations of the Bible are the very words of God and for them it’s a case of “just believe” and hang any difficulties with the profanities of science. Thus, any move to probe their interpretations is regarded as an affront, tantamount to blasphemy. It is impossible to engage with these cognitively quarantined Christians. At least AiG are not in this category; they are prepared to dabble in some science. Let’s be thankful for small mercies.

The contemporary Young Earth Creationist movement owes much to the publication of the book “The Genesis Flood” by John Whitcomb and Henry Morris in 1961. It is in this volume that we find an early attempt to address the problem of star light. On page 369 the plain interpretation that light from distant stars and galaxies must have taken far longer than 10,000 years to reach us is referred to as a “contention”:

But this contention of course begs the question. It constitutes an implicit denial that the universe could have been created as functioning entity. If creation has occurred at all then it is reasonable that it would be a complete creation. It must have had an “appearance of age” at the moment of creation. The photons of light energy were created at the same instant as the stars from which they were apparently derived, so that an observer on the Earth would have been able to see the most distant stars within his vision at that instant of creation.

In other words according to Whitcomb and Morris photons of light were created in mid flight, giving the false impression that they originated from those distant cosmic objects. Perhaps sensing that this concept compromises the integrity of God’s created work Whitcomb and Morris hedge by mooting another idea that I have seen bandied about in Young Earth Creationist circles; namely, variable speed of light or “VSL”. Whitcomb and Morris promote a paper that questions special relativity and conclude that because such ideas can be submitted in scientific circles then this demonstrates:

… that astronomy has nothing really definite as yet to say about the age of the universe.

The approach here is the tried and tested science of negation, a method very common to the YEC movement; do all you can to muddy the waters of science by attempting to show that it is all speculative, inconsistent and highfalutin theory and then leave the scene, returning uncritically to your dogmatic interpretations of Biblical texts. If all else fails you can always retreat into the view that the Cosmos was conjured up as seen, “just like that”, 6000 years ago, a view that is all but irrefutable; for any observations to the contrary can be put down to mere appearances.

Of course it is very easy to take the view that science, which necessarily embodies the use of working assumptions and/or postulations whose veracity makes the world providentially knowable, cannot be trusted to tell us anything. This negative approach is unlikely to be applied by YECs to their Biblical interpretations, interpretations which also necessarily rest on the working assumptions needed to make the Bible comprehensible. However, when YECs turn from negative criticism to positive theory construction we can apply similar standards of criticism to their work. Therefore it was with great interest that I found an attempt on AiG to provide a scientific rationalization of the star light problem. (See here and here).

Firstly, it seems that AiG have moved away from the idea of a variable speed of light. On this page on their web site they warn Christians about using “far out claims”, one of those “far out claims” being the very idea that the speed of light has drastically changed:

We can speculate about a large-scale change of light speed in the past, but evidence is lacking. In addition, any alteration of light speed would affect several other constants of nature, but evidence of these changes is also lacking.

Young Earth Creationists always have the option of opting out of science by declaring that God conjured up the Cosmos, as seen, “just like that”, 6000 years ago. Such a concept is not very amenable to scientific probing and it is therefore no surprise that YECs who hold such views are not taken seriously. But it seems that the YECs at AiG want to show they are willing to put their name to at least some semblance of scientific inquiry. So rather than appear to be just another insular religious ghetto of fideist ultras they are prepared to get out on their bikes and look for some scientific kudos. Therefore on the links I have provided, AiG put forward for consideration the ideas of Russ Humphreys (See picture above). Make no mistake about it, this is dangerous ground for AiG; it means that they are coming out from under the cover of ad hoc creative conjuring into the line of fire of scientific inquiry. No wonder Christian fideists proactively rail against this sort of activity as an unspiritual courting of worldly science. AiG have a lot to lose.

In a later post I’ll have a look at Humphreys’ ideas.

Sunday, June 13, 2010

I’m Checking this One Out

I have been following with great interest the posts on Robert Sheldon’s blog, a blog found on the neo-conservative web site Townhall.com. Sheldon’s ideas about OOL interest me and I have added comments to some of his posts. Some of those comments can be found here, here and here.

Sheldon’s right wing sympathies also interest me as they call attention to the strange complementary inversion of philosophies that exists between the far right and far left: Those on the far left are likely to believe that the decentralised algorithms controlling matter, contrived with no apparent foresight, have created the organized complexities of life; but they are less inclined to allow decentralized processes to rule the socio-political realm where instead centralised “big government” planning is preferred. In contrast those of the far right often incline toward the view that very non-local processes, processes with foresight and intelligence, have planned and created life; but they do not want to see a centralized planner in the realm of the socio-political and instead believe that the localized unplanned decisions of the market place lead to an ordered and free society.

Sheldon’s take is particularly interesting because it is satisfyingly anomalous, idiosyncratic and exotic. Moreover it shatters the mold: his position on the creation of life looks to be intermediate. If we could quantify intelligence with a quantity I shall call “E”, then I interpret Sheldon’s theory to be the equivalent of postulating that “evolution” can make jumps of intermediate values of E. This entails a process of “evolution” that would look a little bit like human history, where the limited quantum of human intelligence beats a path through time. Sheldon might just possibly be onto something.

Sunday, June 06, 2010

Dembski, McIntosh and “Evilution”.

There is a rumour that the Flat Earth society have sided with William Dembski.

This post on Uncommon Descent publishes the abstract of a paper by Andy McIntosh. The paper, according to William Dembski, is about the “thermodynamic barriers” to Darwinian evolution. Here are my comments on various parts of the abstract:

McIntosh: The theory of evolution postulates that random mutations and natural selection can increase genetic information over successive generations.

My Comment: No. Assuming the concept of “information” here is similar to that used by Dembski (that is, –log[p]), then evolution, as currently conceived, is not a process that creates information. As I have tried to show in this blog, evolution converts information from one form into another: it does not pretend to transform an absolute improbability into a realistic probability. Being an anti-evolutionist, McIntosh is likely to be anxious to cast “evilution” into the mold of a pretender to the throne of information creator.
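For readers unfamiliar with the –log[p] measure mentioned above, here is a toy Python illustration (the probabilities are arbitrary examples of my own, not figures from Dembski or McIntosh):

```python
from math import log2

def information_bits(p):
    """Self-information of an outcome with probability p, in bits: -log2(p)."""
    return -log2(p)

# Halving an outcome's probability adds exactly one bit of information
assert information_bits(0.5) == 1.0
assert information_bits(0.25) == 2.0

# A vastly improbable outcome carries a correspondingly large information value,
# which is why "information creation" and "improbability" are linked in this debate
print(information_bits(2**-100))  # 100.0
```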

McIntosh: By [evolution] it is proposed that a particular system can become organised at the expense of an increase in entropy elsewhere. However, whilst this argument works for structures such as snowflakes that are formed by natural forces, it does not work for genetic information because the information system is composed of machinery which requires precise and non-spontaneous raised free energy levels – and crystals like snowflakes have zero free energy as the phase transition occurs.

My Comment: No. A raised free energy is not in and of itself relevant as a road block to evolution. Osmosis is a process driven by an increase in entropy and yet it is capable of locally raising the free energy of a column of liquid, inasmuch as the column may become higher and thus contain more potential energy. Much more pertinent than these raised free energies is the fact that the configurations of genetic information are, when placed in the right biological context, “meaningful”. As we know, purely natural forces ensure that a growing organism annexes increasing amounts of material into a state of raised free energy. In this case the information is transformed from preexisting biopolymers into the newly annexed material. Thus, the crucial issue is NOT raised free energies as such but the origin of the information that succeeds in propagating from one form of reification to another.

McIntosh: …. biological structures contain coded instructions which, as is shown in this paper, are not defined by the matter and energy of the molecules carrying this information. Thus, the specified complexity cannot be created by natural forces even in conditions far from equilibrium. The genetic information needed to code for complex structures like proteins actually requires information which organises the natural forces surrounding it and not the other way around – the information is crucially not defined by the material on which it sits.

My Comment: No. Once again I see hints of the common view of anti-evolutionists to expect to see information reified on some material substrate and thus they wrongly conclude that because they can’t find such a reification then that information doesn’t exist in our divinely selected cosmic set up. Anti-evolutionists refer to the processes of that cosmic set up as “natural forces”, a term they use pejoratively in the sense of “mere natural forces”. Trouble is, natural forces generate life every time life is propagated in the womb or egg. The issue is not whether natural forces can generate life, since clearly they can, but just where these natural forces get their information from to do so: As I have suggested in this blog this information could conceivably be found in the platonic world of configuration space and the selection of the function T(L). ( See my last but one blog )

McIntosh: The fundamental laws of thermodynamics show that entropy reduction which can occur naturally in non-isolated systems is not a sufficient argument to explain the origin of either biological machinery or genetic information that is inextricably intertwined with it.

My Comment: Yes, I completely agree. But contrariwise, the law of overall increasing entropy is not a sufficient argument to eliminate evolution from the inquiry. This is because entropy is a measure of the number of microstates consistent with a macrostate. It is therefore a quantity that is measured given the ordering effects of the physical laws constraining the system and therefore it is not very sensitive to measures of absolute disorder; a system can move to its most disordered state in terms of entropy, and yet because of physical constraints, still be a very ordered system in absolute terms. It is ironic that it is precisely the specter of an intelligent designer lurking behind the cosmic scene that prevents the elimination of evolution from this inquiry: the cosmic set up could be a cleverly selected design option of an intelligence capable of conceiving the abstract function T(L).
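The point about entropy being measured relative to physical constraints can be made concrete with a toy counting exercise of my own devising (this is purely my illustration, not anything from McIntosh's paper): take N two-state units and compare the number of accessible microstates with and without a constraining "law".

```python
from math import comb, log

N = 20  # toy system of N two-state units

# Unconstrained: all 2^N configurations are accessible
W_unconstrained = 2**N

# Constrained: suppose a physical law fixes exactly 5 units in the "up" state;
# only the configurations satisfying that constraint are accessible
W_constrained = comb(N, 5)

# Entropy (in units of Boltzmann's constant) is S = ln W
S_unconstrained = log(W_unconstrained)
S_constrained = log(W_constrained)

# Even at its maximum-entropy state, the constrained system explores far fewer
# configurations: maximal entropy *given the constraints* can still be a very
# ordered state in absolute terms
assert W_constrained < W_unconstrained
print(W_constrained, W_unconstrained)  # 15504 1048576
```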

McIntosh: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.

My Comment: Yes and no. ‘Yes’ because evolutionary information may be located in the immaterial and abstract space pertaining to the function T(L). It is this function which may constrain thermodynamics to such a degree that increasing entropy (which ironically becomes the diffusional dynamic that hunts for life) is capable not only of locally raising free energy levels (as in osmosis) but also of raising the probability of the right configurations. The function T(L) transcends the decay of the second law. ‘No’ because, once again, McIntosh introduces the same lame materialistic Aunt Sally of “natural forces” as the “informationless” elemental pretender to the Creator’s crown. McIntosh and his fellow anti-evolutionists can therefore picture themselves as crusaders fighting the good fight against the straw man of “naturalism”.

Summing up Comments: From McIntosh's abstract alone things are not looking good for his paper. I’m not convinced that people like Dembski and McIntosh are able to eliminate status-quo evolution from the inquiry for the right reasons. Instead they continue to try to short-cut the issue by attempting to cast the debate into the dualistic mold of “natural forces” vs. the Creator; an approach that fails to do justice to the fact that those natural forces are themselves the Creator’s creation. I suspect that if I read McIntosh’s paper I would find nothing technically wrong with it: the devil is likely to be in a subliminal dualistic “naturalism vs. creation” spin that he would impose on the argument, a spin we can start to detect in the abstract. Let me make it clear that none of this necessarily means that evolution is actually a viable process given the particular physical regime of our cosmic theatre – the constraint embodied in T(L) may not be sufficiently great for the computation engine of entropic diffusion to return a realistic probability for the formation of life.

Many ID theorists are very immersed in the politics of the evolution vs. anti-evolution debate and from their embattled perspective "Intelligent Design" and "anti-evolutionism" are interchangeable terms. For them “evolution” and intelligent design are an irreconcilable dichotomy, and this in turn is a reflection of a polarized conceptual filter that sees the whole debate through a "naturalism vs. creator" paradigm. Therefore I have grave doubts about the ability of McIntosh and Dembski to handle the philosophical and theological issues correctly. But for some followers lacking in expertise, Dembski and McIntosh are the people Christians are supposed to get behind. In this thread on Network Norwich & Norfolk an anti-evolutionist correspondent started pushing McIntosh’s view on thermodynamics (see towards the end of the thread). The said correspondent clearly had little understanding of the subject and presumably naively thought McIntosh had enough authority to end the argument, which of course he doesn’t. This correspondent was, I believe, a young earth creationist; evidence that the anti-evolutionists are a broad and uneasy coalition of young earth and old earth Christians, perhaps united by the abuse they get from the scientific establishment. It is not surprising, then, that when one of their number succeeds in getting a peer-reviewed paper published they hang out the flags.

This reverend gentleman is clearly not connecting with his passions; he's far too balanced

Polarisation passion feeds. Passion polarisation breeds. Polarisation is passion's cause, for crusade and holy wars.


STOP PRESS
Interesting is this post by Denise O'Leary on Uncommon Descent, where she accuses Biologos, the Christian science think tank founded by Francis Collins, thus:

I suggest that the real action now is with groups like Biologos, aimed explicitly at persuading Christians that they can maintain an intellectually respectable faith without believing that the universe (?) or life forms show evidence of design.

As I said above, "For them evolution and intelligent design are an irreconcilable dichotomy...". Christians who believe in evolution now stand accused by O'Leary of not giving the Divine designer credit and instead of believing in "natural forces". Little wonder that I am so loath to side with these anti-evolutionists; in fact, how can one side with them when there is a real danger of being spiritually incriminated in this way?

Tuesday, June 01, 2010

Mathematical Mysteries


Someone emailed me asking if I could make sense of a couple of mathematically oriented matters.
The first was the strange pervasive quality of the “Golden Number”, as exemplified by this YouTube piece:
http://www.youtube.com/watch?v=PjrK96wasDk
The second was some of the amazing symmetries and regularities one can find in arithmetic operations. For example:
1 x 8 + 1 = 9
12 x 8 + 2 = 98
123 x 8 + 3 = 987
1234 x 8 + 4 = 9876
12345 x 8 + 5 = 98765
123456 x 8 + 6 = 987654
1234567 x 8 + 7 = 9876543
12345678 x 8 + 8 = 98765432
123456789 x 8 + 9 = 987654321
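For the curious, the whole pattern above can be checked mechanically with a few lines of Python:

```python
# Verify: the n-digit number 123...n, times 8, plus n,
# equals the n-digit decreasing string 987...
for n in range(1, 10):
    ascending = int("123456789"[:n])    # 1, 12, 123, ...
    descending = int("987654321"[:n])   # 9, 98, 987, ...
    assert ascending * 8 + n == descending
print("pattern holds for n = 1..9")
```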
Here are my comments on the issues raised:

1. The Golden Number:
I pondered this question as far back as 1968 when my maths teacher gave me a colourful coffee table book on maths. It had a section on the Golden ratio and the amazing number of seemingly disconnected connections in which it arose. Why was this theme running through nature?
The Golden number arises wherever one finds the following quadratic equation:
x^2 - x - 1 = 0 (read x^2 as “x squared”)
This equation returns a solution for x of value G, where G is the “golden number” = 1.618…etc. Thus any phenomenon where this equation is entailed will return a golden number. This immediately explains why the value of G appears in some contexts, viz:
1. For example, a requirement of snail shells is that during growth of the shell the snail need not adapt to any fundamental changes in the distribution of the weight of the shell – hence, big snails look exactly like small snails; that is snail shells are “self similar”. This requirement, it can be shown, entails the above quadratic and therefore the value of G.
2. Another example is the proportions of the classic “golden rectangle”. These proportions are such as to fulfil the requirement that cutting off a square leaves a smaller rectangle with the same proportions as the original. Once again, given this requirement of proportionate similarity, the same quadratic arises, thus entailing G. (A4 paper, incidentally, satisfies a different similarity requirement: its halves share the proportions of the whole, and that requirement entails a ratio of the square root of 2 rather than G.)
3. It is also clear why the Fibonacci series returns increasingly better approximations for G as the series progresses. In this series we require n1 + n2 = n3. That is, we generate the next number n3 in the series by adding the two previous numbers in the series. Now, we are interested in the ratio of n3 to n2; in other words the value n3/n2, because this value, as the series proceeds, tends toward G.
Now: n3/n2 = (n1 + n2) / n2, by the definition of the Fibonacci series.
But: (n1 + n2)/ n2 = 1 + n1/n2
Thus: n3/n2 = 1 + n1/n2
Or: n3/n2 = 1 + 1 / (n2/n1)
In this latter equation n3/n2 is a better approximation to G than n2/n1. This means that the left hand side of the last equation converges to G as the series advances. Therefore, for large numbers, n3/n2 ~ n2/n1, and we can write the last equation as:
G = 1 + 1/G
Which transforms to the quadratic we are looking for:
G^2 – G – 1 = 0.
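As a quick sanity check, a few lines of Python confirm both that G = (1 + sqrt(5))/2 satisfies this quadratic and that the Fibonacci ratios do indeed converge to it:

```python
from math import sqrt

# The golden number as the positive root of x^2 - x - 1 = 0
G = (1 + sqrt(5)) / 2
assert abs(G**2 - G - 1) < 1e-12

# Advance the Fibonacci series and watch n3/n2 approach G
n1, n2 = 1, 1
for _ in range(30):
    n1, n2 = n2, n1 + n2
assert abs(n2 / n1 - G) < 1e-9

print(round(G, 6))  # 1.618034
```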
The ubiquity of G, or rather the quadratic x^2 – x – 1 = 0, seems to be on a par with other themes running through nature, like the Boltzmann distribution, the Gaussian bell curve, the power law, and even the value of pi (see my recent blog on the subject here http://quantumnonlinearity.blogspot.com/2010/05/back-of-envelop-mathematical-model.html). So if the ubiquity of G is mysterious then so are these other mathematical forms. We trace back the existence of these mathematical themes to their origins in very similar starting conditions, conditions which thus lead to isomorphic mathematical outcomes. But once we spot the isomorphic underlying conditions the existence of the theme seems less mysterious. For example, random walk underlies the bell curve; consequently wherever we find randomness we will not be surprised to find a Gaussian bell curve. Likewise, any phenomenon governed by the quadratic x^2 – x – 1 = 0 will give us a golden ratio. But having said that I have to admit that in the case of G, the underlying isomorphisms are often not very clear and thus the presence of G can be rather mysterious; for example I don’t know why the genetic algorithm generates logarithmic spiral patterns in plant flowers – perhaps it is something to do with growth self-similarity, as in the snail shell. Perhaps a biologist could tell us.
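The claim that random walk underlies the bell curve is easy to demonstrate numerically. The following Python sketch (walk length and trial count are arbitrary choices of my own) shows that the endpoints of many walks of ±1 steps cluster with mean near 0 and standard deviation near sqrt(n), as the Gaussian picture predicts:

```python
import random
from statistics import mean, stdev

random.seed(0)  # fixed seed so the run is repeatable

# Each walk: n independent +/-1 steps; record the final displacement
n, trials = 400, 2000
finals = [sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]

# The endpoints are approximately Gaussian: mean ~ 0, std ~ sqrt(400) = 20
m, s = mean(finals), stdev(finals)
assert abs(m) < 2        # close to 0
assert abs(s - 20) < 2   # close to sqrt(n)
```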
Actually, one might claim that randomness (which is implicated in the Gaussian bell curve) is itself a very mysterious phenomenon: Koestler put it well in his book “The Roots of Coincidence”. After remarking on the remarkable fact that such diverse things ranging from nineteenth century German soldiers being lethally kicked by their horses to “Dog bites man” reports in New York, all obey the same statistical curves, Koestler goes on to say:
But does it (probability theory) really lead to an understanding? How do those German Army horses adjust the frequency of their lethal kicks to the requirements of the Poisson equation? How do the dogs in New York know that their daily ration of biting is exhausted? How does the roulette ball know that in the long run zero must come up once in thirty-seven times, if the casino is to be kept going? The soothing explanation that the countless minute influences on horses, dogs or roulette balls must in the long run "cancel out", is in fact begging the question. It cannot answer the hoary paradox resulting from the fact that the outcome of the croupier's throw is not causally related to the outcome of previous throws: that if red came up twenty-eight times in a row (which, I believe, is the longest series ever recorded), the chances of it coming up yet once more are still fifty-fifty.
It was this passage that helped spur me into studying randomness so intensely in the 1970s.
In my opinion whichever way we look we find a profound mystery; namely, the profound mystery that contingent things – that is, things with seemingly no logical necessity or aseity – simply exist. The issue I have with the video you linked to is that, in common with much transatlantic “Intelligent Design” thinking, only some phenomena are singled out and identified as betraying signs of design; the insinuation is that some things are somehow “natural” and therefore do not need Divine design, creation and sustenance. But to me it’s either all “natural” or it’s all “supernatural” – I’ll take the risk and plump for the latter, as the sheer contingency of existence, no matter how mathematically patterned, will forever remain utterly logically unwarranted (what I refer to as the great “Logical Hiatus”), and thus in my view a revelation of God’s creative providence.
The problem we often have with mathematics is that sometimes it looks like a kind of magic: It reminds me of that “think of a number” game where somebody in due course appears to “conjure up” the number you first thought of – it may at first seem that some sort of underlying magic is at work but in fact it is the underlying logic unperceived by us whose consequences surprise us. Likewise, the ubiquity of G at first looks very mysterious, as if some divine magician has come along and tweaked the universe here and there, thus betraying his presence by way of this magic number. It’s all very reminiscent of Arthur C Clarke’s “Tycho Magnetic Anomaly” in 2001: A Space Odyssey, where a mysterious and yet clearly artificial object is found on the Moon; something very “unnatural” is discovered in a vista of “naturalness”. This 2001 scenario is basically the transatlantic paradigm of intelligent design; that is, of a God who is manifest largely in various inexplicable logical discontinuities found in the cosmos. But in my opinion this is hardly the right model for God: The Judeo-Christian view of God is not one of an entity who is just responsible for the overtly anomalous: For as soon as we recognise the underlying and unifying mathematical logic at work in the cosmos we no longer see a patchwork creation punctuated by arbitrary “supernatural” anomalies. Instead these underlying logical themes open out into an even bigger mystery; the mystery of a much more totalizing kind of intelligence; an entity of an entirely different genus to the part-time intervening tinkerer who is on a par with alien beings.

2. Mathematics as a Human Construction?
I’m not so sure that it is right to think of mathematics as a purely human invention. I’m leaning toward the view that “invention” is a kind of “discovery” and vice versa. In platonic space every structure/configuration exists in the sense of being a possibility awaiting realisation – it is then down to us to drag that structure out of potentiality and to reify it in material form as symbols on paper, a computer program, or concepts in our heads.
In fact consider the case of the story writer. As I have said before, an average-sized book of around a million characters, drawn from an alphabet of roughly thirty symbols, has about 30 to the power of a million possible arrangements of text. Think of the creative act of story invention as one of dragging out into the real world a single combination of text from those many possible combinations! Achieving such a feat presupposes that the combinatorial hardware exists to take up one of those many possible platonic states. Thus, the “invention”, “discovery” or “reification” of one of those combinations is conditional upon a pre-existing substrate and is thus very much bound up with the “hardware” providence has supplied in the first place in order that these platonic entities may be reified.
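As a quick sanity check on the size of that combinatorial space (my own arithmetic, not from the original post, assuming a 30-symbol alphabet and a million-character book):

```python
from math import log10

# Assumed figures: a ~30-symbol alphabet and a million-character book.
# The number of possible texts is 30**1_000_000; we can't print that number,
# but we can state how many decimal digits it has.
alphabet, length = 30, 1_000_000
num_digits = int(length * log10(alphabet)) + 1
print(num_digits)  # → 1477122
```

In other words, merely writing down the number of possible books would itself take a fair-sized book.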

3. Regularity and Symmetry in Mathematical Patterns.
Here we have some mathematical operations that have produced patterns of symmetry and regularity – in other words, patterns of “high order”. Now let’s think about the opposite; namely, mathematical operations that generate disorder. To this end take a number like 2455, square it, and then take out the middle digit. Then take the first four digits of the square and square that, once again selecting the middle digit. Repeat the process many times. One finds that the sequence of selected digits looks very much like a random sequence of numbers (probably only approximately so) with no discernible pattern. Hence, mathematical operations can generate both order and disorder.
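The digit-shuffling procedure above is easy to sketch in code (it is a relative of von Neumann’s middle-square method; the handling of even-length squares below is my own choice, since the description doesn’t specify it):

```python
def middle_digit_sequence(seed=2455, n=20):
    """Square the current value, record the middle digit of the square,
    then carry the first four digits of the square forward and repeat."""
    out, x = [], seed
    for _ in range(n):
        s = str(x * x)
        out.append(int(s[len(s) // 2]))  # middle digit (right-of-centre for even lengths)
        x = int(s[:4])                   # first four digits seed the next round
    return out

print(middle_digit_sequence())
```

Running this produces a patternless-looking string of digits, despite the entirely deterministic arithmetic that generates it.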
I think of mathematical operations as physical systems, systems that manipulate tokens; basically they are algorithms that can be programmed on a computer. I’m not sure whether or not this algorithmic view of maths is valid for some of the more abstract meta-mathematical thoughts (like Gödel’s theorem), but it is clear that a large class of mathematics comes under the heading of algorithmics, and certainly the examples you give are algorithmic in as much as they could be generated on a computer with the right programming.
Anyway, in the context of algorithmic mathematics the ordered patterns you observe make sense to me. One way of making sense of these patterns is to start with the observed patterns themselves rather than the process that generates them. These patterns are highly organized. Highly organised patterns are readily compressible because they don’t display the variety and complexity of disordered patterns and thus they don’t contain a large amount of data. Hence they can be “compressed” by expressing them as simple rules of generation. Expressing such a pattern as the product of some kind of rule-based mathematical procedure is a way of compressing that pattern. It is therefore not surprising that elementary arithmetic procedures generate highly organized patterns, because those procedures are in effect ways of compressing the ordered patterns they generate. Thus, the symmetry and order you observe is really not surprising. What I have more difficulty with is the fact that simple arithmetic procedures, as we have seen, can also “compress” some disordered sequences, in the sense that they can generate disordered patterns in a relatively short time. This actually created a paradox for me and I proposed a solution in one of my blog entries – see this link: http://quantumnonlinearity.blogspot.com/2009/10/proposed-problem-solution.html

Tuesday, May 25, 2010

Bend it, move it....

Young Earth Creationists bend the second law of thermodynamics.

On Uncommon Descent the erroneous idea that the second law of thermodynamics contradicts evolution is still being peddled (see here and here). The reason why the second law is consistent with evolution is not difficult to understand. Trouble is, the passionate anti-evolutionists have a strong contingent of Young Earthers in their midst. The latter, going right back to the days of Morris and Whitcomb’s book “The Genesis Flood”, have committed themselves to this fallacy and now cannot budge from it without becoming a laughing stock. I have tackled this subject several times on this blog, but there is no harm in dealing with it again from a slightly different angle.

In physics the entropy of a physical system, given its stipulated macroscopic conditions, is an increasing function of the number of microscopic states consistent with those conditions - the latter number is referred to as the statistical weight of the macrostate. For example, if we stipulate that the system is a gas with stated pressure, temperature and density then the entropy will be determined by the number of ways such a system can be realized. If we stipulate that the system is crystalline then it is fairly obvious that the entropy of such a system is relatively low, as there are, relatively speaking, not many microstates that realize such a configuration. More abstract macroscopic conditions can be conceived, such as requiring that the system is proactively self-perpetuating, as in the case of living structures. Interestingly, it is clear that living structures are far more “disordered” than crystals in as much as the number of ways it is possible to exist as a proactively self-perpetuating configuration is myriad compared to the number of ways a crystal structure can exist. Entropy as a quantity does not pick up on the fact that between the extreme ends of its rather undiscerning spectrum there are remarkable structures that from an entropy point of view are not differentiated as particularly special. Another limitation of entropy as a quantity is that it only measures the statistical weight of macrostates given the constraints of the physical regime, and these meta-constraints remain untouched by thermodynamic decay. Entropy is therefore not a measure of absolute disorder: it is possible to conceive of physical regimes that are so restricted that only highly ordered configurations are consistent with those regimes; in such a context entropy only provides a measure of those macrostates with the greatest statistical weighting. Thus increasing entropy does not necessarily imply a decay to absolute disorder.
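The point about statistical weight can be made concrete with a toy system (my own illustration, not from the post): N two-state components whose macrostate is simply the count k of components in the “up” state. Entropy is then the log of the number of microstates realising that count:

```python
from math import comb, log

def entropy(N, k):
    """Dimensionless entropy S = ln W, where the statistical weight W is
    the number of microstates realising the macrostate (N, k)."""
    return log(comb(N, k))

N = 100
# The "disordered" half-and-half macrostate is realised by vastly more
# microstates than the rigidly constrained all-up macrostate, and so has
# far higher entropy.
print(entropy(N, 50))   # large
print(entropy(N, 100))  # ln(1) = 0.0: a single microstate
```

The macrostates in between these extremes are where the interesting structures live, and entropy alone does not single them out.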


That the second law of thermodynamics is not inconsistent with evolution becomes clear even if we start by assuming that the cosmos is the product of an intelligent designer. In fact, what now follows is an intelligent design argument against the YEC abuse of the second law of thermodynamics. If our intelligent designer is endowed with the omnipotence and omniscience usually associated with the Judeo-Christian deity, then given a construction set of parts such an entity would be able to conceptually assemble the entire space of configurational possibilities open to that set of parts. This configuration space forms a kind of manifold of points, and a Judeo-Christian deity would be able to think about this manifold as a human being thinks about 3D space. As it stands, however, this huge platonic object is pretty dead and static. So the next step is to introduce some kind of physics in order to give it dynamism. To this end our Judeo-Christian deity could proceed to “wire” up this configuration space into a network of connections. Most naturally the metric of this network would recognize the fact that the manifold of configurations naturally forms a network of relations: these natural relations exist by virtue of the fact that some configurations are separated by only a small distance in terms of the incremental adjustments needed to turn one into the other.

So, now we have a manifold of points connected into a network by some kind of connection metric. But we still have a pretty static object. The next thing is to give this network a dynamic by assigning transition rates to the connections: This means that if the system is known to be in one configuration we can then work out the probability of it making a transition to one of its “nearby” configurations. The manifold now has a dynamic; that is, it has some physics: Given these transition rates the system will now move from one configuration to the next.

Let us represent this dynamic by the function, T(L), that maps the links represented by L to corresponding transition rates T. In its most straightforward form T will consist simply of a list that maps the links between the nodes of the manifold to their respective transition rates. Given the powers of our assumed deity, it is clear that such a being has available to it an enormous number of choices on how to wire up this network and how to assign transition rates: in particular it would be quite within the powers of this deity to so wire up this network that it would move toward configurations that contain living structures. To achieve this the transition probabilities need not be directionally biased: the function T may form narrow channels of flow where no direction is favoured, but because these channels form such narrow bottlenecks the configurations containing life would have relatively high statistical weight, thus considerably enhancing the probability that such configurations arise as the system moves through configuration space. If within the specified channels of development the transition rates are isotropic then this implies that the motion within these channels is one of unbiased random walk. From random walk immediately follows the second law of thermodynamics; namely, that the system would tend to move toward those macrostates with the greatest statistical weight - which is all the second law tells us. Since T(L) puts a tight restriction on what is possible, the macrostates with the greatest statistical weight (that is, with the greatest entropy) are not necessarily disordered in absolute terms. Thus configurations containing living structures can evolve and yet the second law not be violated. The second law works within the constraint supplied by the function T(L), whose form is not subject to thermodynamic decay and is selected by divine intelligence to considerably enhance the probability of life arising.
Here then is the rub for those who naively think the second law to contradict evolution: The above system would simply migrate towards its most probable macrostate, that is the macrostate with the greatest “disorder”, and yet if T(L) is carefully chosen by our super-intelligent deity these so-called "disordered" states may contain what in absolute terms are the highly complex ordered configurations of living structures.
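A minimal simulation (my own toy model, with every specific invented for illustration) shows the principle: an unbiased random walk on a constrained network still migrates toward, and lingers in, the macrostate containing the most configurations, which is all the second law requires:

```python
import random

random.seed(1)

# Ten configurations in a chain; the links themselves play the role of T(L),
# restricting which transitions are possible. Macrostate B = {2,...,9} has
# four times the statistical weight of macrostate A = {0, 1}.
links = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}

def time_fraction_in(macrostate, steps=100_000):
    """Fraction of an unbiased random walk spent inside the given macrostate."""
    x, hits = 0, 0
    for _ in range(steps):
        x = random.choice(links[x])  # unbiased step to a linked configuration
        hits += x in macrostate
    return hits / steps

# The walk spends the large majority of its time in the high-weight macrostate.
print(time_fraction_in({2, 3, 4, 5, 6, 7, 8, 9}))
```

If the high-weight region of such a network were wired to contain life-bearing configurations, the walk would find them for purely statistical reasons, with no directional bias anywhere in the dynamics.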


It all comes down to how the function T(L) maps transition rates to the links between configurations. There is actually nothing really profound here: given the freedom to choose any arbitrary T(L), Divine Intelligence is quite capable of contriving a network of transition rates in such a way as to favour the evolution of life. The function T(L) effectively defines the physics of the system; that is, it tells us the probability of the system moving from one state to the next. However, it is a funny sort of physics as it simply takes the form of a list of connections and associated transition rates. This list of connections will contain a high level of information on two counts:

a) It will be a very long list and thus in terms of its linear size it will contain a lot of information.
b) It will be simply one list of many, many possible lists and it will therefore be an extremely rare selection. If, assuming equal a priori probabilities, we equate this rarity to a selection probability, then the implied improbability will entail a very high level of information as defined by the expression for information, –log P (the expression ID theorist William Dembski uses).
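The information measure mentioned in (b) is just the standard surprisal; a two-line illustration (figures invented for the example):

```python
from math import log2

# Picking one particular list out of M equally likely alternatives has
# probability P = 1/M, and hence carries I = -log2(P) = log2(M) bits.
def self_information_bits(p):
    return -log2(p)

print(self_information_bits(1 / 2**20))  # → 20.0
```

For the astronomically long lists contemplated above, M is astronomically large, and so therefore is the information content of any one selection.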

But the profound and difficult questions are these: is it possible to compress and encode the information in this list into a set of elegant laws? In fact, is our system of physical laws one such compression? I’m not sure I know the answer to these questions, but they are questions I have never seen properly framed, let alone addressed, on Uncommon Descent or in any of the papers I have read written by anti-evolutionists. Instead the anti-evolutionist stance has a tendency to encourage a spurious dichotomy between “naturalism” and intelligent design. Naturalism is the view that somehow elemental nature can go it alone, a view which is at the heart of atheism. However, no doubt unintentionally, the views expressed by the anti-evolutionists appear to promote the concept of naturalism: the anti-evolutionists who follow William Dembski loudly proclaim the virtues of their design detection science, oblivious to the fact that it is easy to construe this as suggesting that some things in nature are “designed” and therefore “artificial”, while some things need no design and therefore are “natural”, uncreated by intelligence.* I’m sure that in their heart of hearts Dembski and his followers don’t intend this insinuation, but it is all too easy to read the anti-evolutionist thesis as setting up a dualist category of nature versus God. They have created a PR problem for themselves and this is indicated by the fact that Dembski feels the need to address it here.


The fallacious use of the second law by the anti-evolutionist lobby only serves to reinforce this false dichotomy between nature’s creative power and Divine creative power: thus the anti-evolutionists, who have a deep instinctual fear of evolution, feel the need to have ready a killer proof of the superiority of Divine creative power over the much feared apparent creative power of nature. But the truth is that the second law is no killer argument against evolution and in any case the apparent power of nature to create via evolution must ultimately trace back to the Divine ability to specify the function T(L).

Footnote
* Some problems are harder than others: The presence of a solution to a difficult problem may give an indication of the level of intelligence that solved it.

Saturday, May 22, 2010

Back of the Envelope Mathematical Model Number 1: Power Laws

After a comment from Stuart about the scale invariance of power laws (See here) I pondered the subject a bit and decided I would do a post on it.

The ubiquity of the power law probably ranks it alongside the Gaussian bell curve. The latter arises whenever there is a random walk, a very general and common phenomenon. Another general curve is the Boltzmann distribution, an example of which can be seen in the way atmospheric density changes with altitude. But power law distributions differ markedly from the Gaussian and Boltzmann distributions in one important respect: unlike the latter two, power law distributions don’t return well-behaved means and variances. (See here)

The Boltzmann and Gaussian distributions contain negative exponentials and these create asymptotic cut-offs which ensure that the integrals used to calculate averages and variances are finite. This cut-off behaviour is essential given that both probability and energy are limited by conservation laws and finite resources. How then can we make sense of a power law distribution, like say the size of meteors, which if taken too literally would suggest that there are bodies out there of infinite mass?
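The divergence is easy to see numerically (my own back-of-the-envelope check, using the example density D(x) proportional to x^-2): the mean exists only by grace of an upper cut-off, and grows without limit as that cut-off is pushed out:

```python
from math import log

def mean_with_cutoff(X):
    """Mean of the density D(x) = C * x**-2 on [1, X].
    Normalisation gives C = X/(X-1); the mean is then C * ln(X)."""
    C = X / (X - 1)
    return C * log(X)

for X in (10, 1_000, 100_000):
    print(mean_with_cutoff(X))  # keeps growing as the cut-off X increases
```

A Gaussian or Boltzmann tail would tame this integral; a pure power law never does.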

Some back-of-the-envelope theorising may help to explain this.

Some power laws seem to have their logical roots in the conventional concept of space constructed by taking the Cartesian product of the coordinates of this space. If an object in a Cartesian space has a size defined by some linear parameter x then that object will have a surface area or volume that will be some power of x. That is, the surface area is proportional to x to the power of p where p is a real number.


Taking my cue from things as diverse as interplanetary bodies and internet nodes, I envisage such objects being capable of attracting further material thereby growing in size. If this is the case then I suggest that the object’s surface area (or volume) is the parameter that determines its growth rate because it is via this surface area that the object interfaces with the “outside world”. If the object grows by the assimilation of material through the membrane of this surface then we might expect the growth of this object to be proportional to this surface area. That is, the rate of growth of the object, G, is given by:
G = k x^p

where k is a constant.

So, an object of size x is effectively ‘moving’ along the x axis with a ‘velocity’ equal to G. If the density of the distribution of objects on the x axis at point x is D(x), then the flow F(x) at point x will be given by:

F(x) = D(x) × G = k D(x) x^p

Now, let us assume that the objects are coming into existence at a constant rate at the lower end of the x axis. This means that when equilibrium is eventually reached the flow along the x axis will be a constant F independent of x. Therefore:

k D(x) x^p = F

Hence:

D(x) = (F/k) x^(-p)


Given the assumptions I have built into this calculation we see a natural power law distribution in x that ultimately traces back to Cartesian dimensionality.
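The result can be sanity-checked in a few lines (my own check; k, p and F are arbitrary example values): at equilibrium the density at size x is the constant flow divided by the growth “velocity” there, which is a pure power law in x:

```python
k, p, F = 2.0, 3.0, 1.0  # arbitrary example values

def growth_velocity(x):
    return k * x**p               # G = k x^p

def equilibrium_density(x):
    return F / growth_velocity(x)  # D(x) = F / (k x^p), proportional to x^-p

# Doubling x should divide the density by 2**p = 8:
print(equilibrium_density(2.0) / equilibrium_density(4.0))  # → 8.0
```

The exponent of the resulting distribution is just the dimensionality exponent p of the growth law, as the derivation above predicts.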

The above simple model really provides a starting point from which more sophisticated models can be contemplated and built. I actually feel rather unsure about the assumption that objects reside in a conventional Cartesian space that allows their surface area/volume to be calculated using a simple power law. In a manifold where nodes are connected randomly, rather than connected in sequence as in Cartesian spaces, the volumes/surface areas are an exponential function of the number of steps between the furthest nodes comprising the object. Notice also the assumption that objects can grow indefinitely – that is, it is assumed that there are no limits on the material available to drive the growth of the objects. This, of course, may not be valid – or it may be valid only for a limited time. If the latter is the case then only during the period when material is available will the power law hold as a reasonable approximation.


Returning to my original query about how power laws, which don’t return convergent means and variances, can exist in a cosmos of limited resources, it seems the answer is this: power laws only work in open systems, that is, in systems where there is an input from without. As long as that input lasts the system will move toward an equilibrium that displays a power law distribution. However, this power law distribution will only be approximate; in a cosmos of limited resources the input can never be maintained long enough for an absolute equilibrium to be arrived at. Thus we expect power laws, like geological lakes, to be a temporary phenomenon, persisting only until resources max out.

Saturday, May 15, 2010

The Gordon Ramsay Foul-Mouthed School of Intelligent Design

The following quote comes from Hugh Ross’ web site here. Ross is an “Old Earth” Christian believer. However he doesn’t believe in evolution. Here is his take on the origins of life:

As Fuz and I described in our book, Origins of Life, the existence of liquid water conditions within a few limited refuges at intermittent times throughout 4.38 to 3.85 bya provides a superior explanation for the zircon and rock remains.5 This scenario leaves open the possibility that God intervened every time, or nearly every time, liquid water was present on Earth to create life. When that life was destroyed by a bombardment event, God simply waited for the liquid water to reappear to create life again. (This is why we used the word “origins”–as opposed to “origin”–in our book title.) In More Than a Theory, I suggest that God might have chosen this repeated origins-of-life strategy as a tool to jumpstart the chemical transformation of Earth’s atmosphere.

This strikes me as all rather anthropomorphic. In fact given that some people have likened the big bang to a kind of cosmic cookery, and that anthropomorphisms don’t come any stronger than in the banter of TV chef Gordon Ramsay, let’s imagine it was Gordon who cooked up the hot big bang roughly 15 billion years ago. He then travels around his shiny new universe looking for spots suitable for a sophisticated experiment in chemical engineering (which is basically what cookery is all about, I suppose). Trouble is, as far as we know no spot for this work turns up for a long time; about 10 billion years in fact, when at last he discovers the Earth. He intervenes in its chemistry and cooks up some life wherever he finds the suitable ingredients in the primeval soups of its waters. Things are looking up, until suddenly:

“Oh sh*t,” said God Gordon, “I completely forgot about those f*ck*ng meteors I created. They’ve completely wrecked my experiment. I’ll have to wait for another few million years before the conditions are right.”

A few million years later….

“Right, here goes one more time... tum-ti-tum-ti-tum...” (sploshing and pouring sounds at this point, along with the occasional clink of Pyrex) “...There you go, life once again!”

A little later, guess what….

“F*ck! I don’t believe it! Those d*mn meteors have wiped out life again. I’ll have to start all over! If at first you don’t succeed, try, try again!”

So Gordon keeps at it until at last the Late Heavy Bombardment ceases and life gets a hold. But Gordon learns a lesson:

“This universe I’ve created is cr*p; it keeps doing things that I don’t want or expect. That’s the last time I create through secondary causes, because I can never tell when it’s going to f*ck up. No more Mr. Deist; from now on it’s going to be Mr. F*ck*ng Interventionist! Then I’ll know where I am.”

So with a bit of chemical tinkering here and there life gets going. The end-Permian extinction and the Cretaceous meteor strike are setbacks, but finally life on planet Earth flourishes. However, after nearly 5 billion years of intelligent design Gordon is in for the shock of his life:

“F*ck! F*ck! It looks as though I’m back to square one again!”

The news? Human beings have appeared on planet Earth along with their free will and weapons of mass destruction.

I think there is something seriously wrong with the above concept of deity. However, the comments that somebody left on my blog here come to mind...

Thursday, May 06, 2010

The Blind Asset Stripper

I’m continuing to work through a set of Adam Curtis documentaries that I received for Christmas on DVD. For the first time in my life I’m finding politics interesting, so Curtis must be doing something right.

The latest series I recently completed watching was entitled “The Mayfair Set”. It tells the story of the rise of free market economics in the UK and America. It introduces us to some of its major personalities on both sides of the Atlantic: Jim Slater, Tiny Rowland, James Goldsmith, and Michael Milken. These men made their fortunes by taking advantage of the fact that companies are legally owned by their shareholders. In most cases the shareholders were passive owners uninvolved with the day-to-day running of the company, seldom looking beyond their share price. Their blinkered ‘bottom line’ view meant that they were unable to resist a good offer when they saw one, and were very ready to sell their shares at the right price, whatever the ramifications in the wider economic sphere. Enter Slater and co: it wasn’t long before these financially ambitious men owned a string of companies, replacing the previously passive ownership with a much more proactive monopoly shareholder who was prepared to “reorganize” the company in order to increase profits. The reorganization process, which they liked to think created leaner, meaner companies, had a more pejorative description: “asset stripping”. By laying off workers and selling off assets, immediate (and ephemeral?) end-of-year profits were returned. However, as far as real long-term productivity was concerned the effects of breaking up and selling off company assets were unclear. But in the meantime short-term profits stimulated a stock market boom as shareholders bought and sold.

That is the background. Whether or not the “asset strippers” helped to create a more productive economy rather than just exploiting a legal way of siphoning off stock market money into their bank accounts is not what I am going to comment on here. What I would like to draw attention to in the context of the evolution/ID debate are the following remarks by “asset stripper” James Goldsmith. They can be found in the third program of the “Mayfair Set” series:

Goldsmith on the “Harshness of Change”:

In nature there can be no continuity because there are predators. And in fact there was some game reserve set up by some well meaning people who said it is horrible that these animals should live under the constant threat of predators. Those animals subsequently became degenerate and died because predators are a necessary stimulant. If you eliminate predators in business and just create comfortable bureaucracies and monopolies with no predators you will have a dead industry and the prosperity of the country will shrivel away and your people will suffer infinitely more than by being subject to constant stimulation, threat and competition.

Goldsmith on corporate responsibility:

The sort of stuff senator Worth is talking about (that is, about corporate responsibility – ed), which is the pastoral America with a little company, a church and university which is going to be there forever and they don’t have to compete with anybody and that competition is awful, is a total mixing up of the difference between doing business and doing good. Doing business is what gives you the fuel to do good. Don’t mix them up. The bee doesn’t make honey because he is doing good. He doesn’t have the soul searching of “Am I doing good?”


What irony; the free market process, so beloved of the American religious right, being described in such folk Darwinian terms! Goldsmith is telling us that individuals need look no further than their own immediate productive interests and hey presto out of the hat pops an organized democratic society. (Although conspicuously, he says nothing about a just society). As in evolution only local and immediate gain need be selected for by otherwise relatively blind local market forces, forces that have no cognizance of the greater whole. Thus, no overall social perspective need be conceived and processed intelligently; the average investor need only act selfishly, looking after his own interests without a mind for the bigger picture. A bonus, according to Goldsmith, is that the investor can even think of himself as doing good; thus he can be both selfish and morally upright at the same time!


In some ways the picture sketched by Goldsmith is how societies inevitably proceed – local decisions are made that, due to limited prescience on the part of finite human intelligence, do not take into account the widest of contexts and long term ramifications. Thus, in a very general abstract sense technological societies evolve, quasi-blindly.

Starting here I did a series of blogs as a follow up to watching Adam Curtis’ documentary series “The Trap”

Wednesday, April 28, 2010

Polkinghorne: A Creationist and ID Theorist.

Last night I was at Norwich Cathedral where John Polkinghorne was giving a talk. I have published some pics to convey the atmosphere. Their quality is not good as my “point and snap” photographic technique struggles under poor conditions.

One of Polkinghorne’s themes (as usual) was that of a universe “endowed with the potentiality” to generate life via evolution. During the Q&A session he described himself as a creationist who believes in Intelligent Design. This claim is entirely intelligible given that Polkinghorne believes the universe to have been deliberately “fine tuned” in order to be fruitful in its production of life.

However, what worries Polkinghorne about the term “intelligent design” also worries me: it has become (especially in North America) synonymous with anti-evolutionism. Thus, the insinuation is that theorists like Polkinghorne who are standard evolutionists must be advocating a life-creating process that is unconditional upon Divine Design. As Polkinghorne himself said, somehow the anti-evolutionists have posited “natural” processes in which God has no hand; that is, their objection to physics as the source of life is based on a subliminal feeling that physics is a “mechanical” or “natural” process that minds itself without the hand of God.

The contention here is not whether evolution is supported by common physics or not, but just who can consider themselves to be flying the flag of “intelligent design”. The anti-evolutionists think that they alone are flying that flag and that everyone else should come on side for God. The insinuation is that those who don’t are somehow in bed with “naturalism” and atheism.

The anti-evolution/evolution debate is an emotionally charged war zone where combatants need to know who to shoot at and who not to shoot at. As far as the anti-evolutionists are concerned Polkinghorne is on the wrong side and cannot be regarded as an ID supporter. It is therefore no surprise that in this polarized environment people like Polkinghorne tell us that they have little sympathy with the anti-evolutionist ID community.

But the category division between "goodies and baddies" is based on quite subtle criteria. In this post on Uncommon Descent a correspondent moots the idea that common descent with genetic front loading can be identified as an ID candidate even though the correspondent doesn’t hold this view himself. Why then can’t Polkinghorne’s evolutionary views, which if they are valid would inevitably entail a biased front loading, also be identified as an ID candidate? I suspect this has something to do with the side of the battle field he identifies with.


Christian flock: some think ID sorts out the sheep from the goats
and that Polkinghorne is a nave knave





JOHN POLKINGHORNE LECTURE NOTES
These are the notes I made on the evening of 27/4/2010
Qualia vs Formalism. Meta questions beyond science
  1. Why is science possible? Why can we render it using equations? Why do we have a rationally transparent world? The Creator: A concept that makes intelligible the intelligibility of the world.
  2. Why is the universe so special? For example the Carbon resonance. Dark energy has been fine tuned to a very small value.
Polkinghorne believes in one universe – the multiverse is speculative and unintelligible.
The universe is endowed with potentiality. The universe is designed to be fruitful. It is not a puppet theatre. Life can make itself. The mutation needed for evolution trades off against cancer. The benevolent cannot easily be separated from the malign – they are inextricably bound up. Hence a universe with ragged edges and blind alleys.

Question & Answer Session
Polkinghorne says he believes in creation and ID.
The IC (irreducible complexity) concept – postulates isolated structures.
American “ID” drives a wedge between the “natural” and God: it posits processes in which God didn’t have a hand.
The Fall: Down to the self-limiting act of God – He gives the gift of free will. Creatures make themselves vs puppets. The good has to be balanced against the evil.

Monday, April 26, 2010

Creationism, Interventionism and Deism.

A rather anthropomorphic view of God's activity

In this post on Sandwalk Larry Moran adopts a new term for that category of evolutionist normally referred to as “theistic evolutionists”. The term, borrowed from a blog post by Jerry Coyne, is “New Creationist”. Although I wouldn’t quibble with the use of the word “creationist” here, I would question the appropriateness of the qualifier “new”. In this post on my church blog I submit some historical evidence indicating that the established prewar church was not inclined to question the findings of science, but rather to integrate those findings into its world view. In contrast, contemporary Young Earth Creationism is a recent recrudescent phenomenon that started at around the time of the publication of “The Genesis Flood” by Whitcomb and Morris in 1961. So, in actual fact, Young Earth Creationism may better qualify for the name “New Creationism”. A more appropriate name for the theistic evolutionists may actually be “establishment creationists”, thus describing their identification with mainstream and established science.

In his “new creationist” post Coyne is responding to his antagonist Ervin Laszlo, a philosopher and systems theorist. Laszlo must surely understand this subject and yet he is quoted by Coyne as appearing to promote “Hoyle’s fallacy”, a fallacy which estimates minuscule OOL probabilities by concatenating a set of assumed independent probabilities into a long product series. Naturally Coyne (and I) would find fault with this kind of procedure. But in a further quote Laszlo appears to show an understanding that evolution requires peculiar preconditions in order to raise its probability to realistic levels – a point of view with which I would concur; if evolution and abiogenesis are facts then the improbability is not to be found in the way suggested by Hoyle’s fallacy, but instead can be traced back to the “one-off” prerequisite mathematical conditions grounded in the physics required by evolution. This “one-off-ness” is, as I have propounded elsewhere, a special case of a more general and abstract thesis that tells us that in the final analysis a great irreducible Logical Hiatus lies at the heart of all finite human theoretical schemes. However, it is the import and interpretation of this inevitable logical hiatus that causes the vexation between atheists and theists. For example, Laszlo effectively waves a red rag to the bull when, after noting that evolution is conditional upon particular (and surprising) preconditions, he goes on to say:
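As an aside, the arithmetic behind Hoyle’s fallacy is easy to sketch. The following is my own illustrative toy calculation (the numbers are arbitrary and appear nowhere in the sources quoted): treating each of N required steps as an independent event and multiplying the probabilities drives the product toward zero exponentially, which is why the independence assumption carries all the weight of the argument.

```python
def naive_product_probability(p, n):
    """Probability that n assumed-independent events, each of
    probability p, all occur: the long product series p * p * ... * p."""
    return p ** n

# Hoyle-style estimate: 100 "steps", each arbitrarily guessed at 1-in-20.
p_total = naive_product_probability(0.05, 100)
print(p_total)  # astronomically small, on the order of 1e-130

# The fallacy lies in the independence assumption. If, once the right
# preconditions hold, each step is strongly conditioned on the last
# (say probability 0.9), the result is larger by scores of orders of
# magnitude even with the same number of steps.
p_conditional = 0.05 * naive_product_probability(0.9, 99)
print(p_conditional)
```

The point is not that either number is right, but that the answer is utterly dominated by the independence assumption smuggled in at the start.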


In the final count the evolution of life presupposes intelligent design. But the design it presupposes is not the design of the products of evolution; it’s the design of its preconditions. Given the right preconditions, nature comes up with the products on her own.

And:

Design is a necessary assumption, because chance doesn’t explain the facts.

Using his own words Coyne renders this sort of argument as follows:

…the evidence for all this is that life is complex, humans evolved, and the “fine tuning” of physical constants of the universe testify to the great improbability of our being here—ergo God.

Evolution started off simple and now many organisms are quite complex. Therefore God.

Here, Coyne is objecting to the God of the gaps argument, an argument whose general form is: “Logical Hiatus, ergo God”. I would concede that given a logical hiatus then an intelligent designer does not necessarily and obviously follow. The atheist has at least some room to play with other ideas in an attempt to “fill the gap” with a non-sentient and elemental cause before he gets to the divine “designer”: For example he might attempt to remove the ultimate improbability of the preconditions needed by evolution with the huge probabilistic resources found in some kind of multiverse model, although this model still inevitably has to make recourse to peculiar preconditions. In fact no matter how one tries to cut it, all human theories have an embedded logical hiatus in the form of given and particular preconditions. This truism leads me to commit myself to the view that logical necessity can only be found in the a priori complex rather than in the simple and elemental algorithmic laws of physics. The elemental is too simple and lacking in degrees of freedom to hide logical self-sufficiency. Therefore I conclude that infinite a priori complexity is the only place left in which Aseity is going to be found, if it can be found at all. Once one takes this conceptual step the possibility of Deity appears at once on one’s conceptual radar.

Although I agree with Laszlo’s theism I would not claim that theism is an obvious inductive leap that automatically follows from the Logical Hiatus that necessarily resides in all finite human theories. The step to theism is less inductive than it is deductive, although it would probably be better to describe theism as a totalizing world view, an all-inclusive sense-making framework that embraces a wide interdisciplinary experience of life from science, history, philosophy, metaphysics, and personal anecdote - even temperament may have a bearing. In the face of evidence that is sourced so comprehensively, arguments for and against theism will necessarily be narrative-intense, absent of killer one-liners and inescapably idiosyncratic; least of all will these arguments meet the strict formal standards of proof that can be demanded of the simple objects dealt with by “test tube precipitating” science. For this reason belief in an intelligent designer is never going to be an obliging, authoritative and publicly shared conclusion. The latter is the preserve of the physical sciences where simplicity of ontology entails greater epistemological tractability.

Although I have some sympathy with Coyne’s objection to the “Logical Hiatus, ergo God” type of argument, I very much disagree with Coyne’s theology: He portrays the “new creationist” God as a part-time deity who occasionally “intervenes”, perhaps only once at the beginning of things:

New Creationism differs from intelligent design because it rejects God’s constant intervention in the process of evolution in favor of a Big, One-Time Intervention,

In fact Laszlo himself encourages this view:

Given the right preconditions, nature comes up with the products on her own. (My emphasis)

The picture is anthropomorphic: The subliminal idea is that God creates in much the same way that a human creator constructs something by configuring elements capable of independent existence. He can then, to a lesser or greater degree, leave His creations to their own devices, perhaps returning from time to time to “intervene” in the operation of this quasi-autonomous creation. It is ironic that those Christian believers like Robert Sheldon who make a big deal of believing in divine “interventions” are not so far removed from Coyne’s portrayal of the deist’s God: The difference is that Sheldon believes not in a “Big, One-Time Intervention” but in “Many-Time Interventions”. Deism lurks threateningly in the background of the Christian interventionist’s philosophy of God.

My concept of God is that of a God who “interrupts” the flow of normalcy rather than “intervenes”; that is, He interrupts or changes His mode of working, a working that in actual fact never ceases: “for in Him we live and move and have our being” (Acts 17:28). When we develop physical theories such as Gravitation or Quantum Mechanics, we do not picture such schemes as doing their work by “intervening” but rather see their action as relentless across all time and space. Likewise, if the ultimate underlying ontology of this universe is the Aseity of deity then I don’t expect that Deity to have the occasional role of the interventionist God, but instead to be a present-tense continuous agent. As the sustainer of the cosmic order His role is relentless in time and space, interrupting the normal flow as and when He pleases.


Addendum: 29/4/10
Unfinished Business.
When I wrote about the concept of "divine intervention" here the following comment appeared:

Well it could be worse, we could be dealing with Pandeism, which proposes a God that is a quite logical and scientific entity which engineered a Universe that is truly random, and lacking in any of that unacceptable tinkering....

Clearly the person concerned never got to grips with the difference between "tinkering" and "interruptions". That person never turned up when challenged in a subsequent post and remains on my "unfinished business" list. It is ironic that those who are so vocal about believing in "interventions" support a philosophy that has a close relation with deism: "N interventions" very easily turns into "Zero interventions" when faith falls away and N slides toward zero.