Monday, June 29, 2009

Darwin Bicentenary Part 24: Dembski's Recent Paper

Has Dembski bowled out the evolutionists?

Having completed a first read of Dembski’s paper on “Life’s Conservation Law” I think the essential mathematical lesson is sound - although I haven’t been through the paper with a fine-tooth comb. Expressing the main lesson of the paper briefly in my own terms: given the observable cosmos, it follows that probability is asymmetrically distributed over the space of possibility. That is, some possibilities have a disproportionately higher probability than others. If evolution has occurred then it is because this asymmetrical distribution holds, considerably enhancing the chances of self-sustaining structures evolving. This conclusion that probabilities are a-priori weighted in favour of life assumes a “small” universe; that is, a universe where cosmic magnitudes and dimensions are far too small to provide a realistic chance of evolution if probabilities were uniformly (i.e. symmetrically) distributed over the space of possibility. This asymmetrical assignment of probability is what, I assume, Dembski means by information input.

OK, so Dembski’s basic mathematical lesson looks to be sound, and his work brings out the stark fact that, evolution or no evolution, biological structures have been resourced by an undeserved “free lunch” of highly asymmetrical probability distributions. It gives the lie to the erroneous belief that evolution (if it has happened) is without “direction”; directionality is a form of asymmetry, so in a general sense evolution requires skews and biases in order to enhance the chance of evolutionary developments.

Dembski’s conclusions are, in fact, no surprise even if we assume all that happens under the cosmic Sun is ruled by a standard law and order package. Given that standard science is a descriptive activity which attempts to explicate observations in terms of law and disorder objects, it appears that those objects are an extremely rare class of contingent entity plucked out of the huge space of platonic possibility. Thus, expressing this realization as even-odds probability ratios will lead to minute probabilities unless there is a considerable weighting favouring the physical status quo. Even if we resort to some kind of infinite multiverse in an attempt to restore symmetrical distributions of probability we are still left with the implicit asymmetry inherent in the philosopher’s paradox expressed by the question: “Why is there something rather than nothing?”.

The bottom line of Dembski’s paper is about a logical hiatus that, humanly speaking, is inescapable. But this stark truth need not be controversial. Although some atheists may fight shy of Dembski’s maths, the strange free lunch that is our cosmos is the same mystery that Paul Davies recognizes and is attempting to grapple with without recourse to theism. What is controversial, needless to say, is the conclusion ID theorists draw from Dembski’s mathematical treatment; namely that from an apparently undeserved free lunch it follows that a-priori Intelligence must be at the bottom of it all. Atheists, particularly militant atheists, intensely dislike the ID community exploiting highfalutin mathematics in this way. Even as a theist myself, I feel I could not claim on the basis of “No Free Lunch” that the atheist game is up and therefore expect them to follow me into theism; theism is just one time-honoured attempt to deal with the problem of the apparent cosmic free lunch. Otherwise I think it is a very bad idea to try and intellectually coerce atheists on the basis of an explanatory filter that defaults to Intelligent Design by a process of elimination. True, some atheists are refusing to face their much feared demons by ignoring the meta issues Dembski’s work underlines, but there may be other atheists out there who hold their views with a clear conscience and with intellectual integrity.

But looking at Dembski’s paper it seems that the game is not up for the evolutionists either. In fact Dembski seems to be saying that provided we accept that the cosmos has a considerable burden of “active information” (that is, an asymmetrical distribution of probability) then evolution is a possibility. For example, Dembski quotes a rather puzzled Stuart Kauffman who is confounded by the perplexing free lunch that evolution seems to require. As Kauffman says:

Where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?

Dembski then says “According to Kauffman, ‘No one knows’” and then goes on to say:

Let’s be clear where our argument is headed. We are not here challenging common descent, the claim that all organisms trace their lineage to a universal common ancestor. Nor are we challenging evolutionary gradualism, that organisms have evolved gradually over time. Nor are we even challenging that natural selection may be the principal mechanism by which organisms have evolved. Rather, we are challenging the claim that evolution can create information from scratch where previously it did not exist. The conclusion we are after is that natural selection, even if it is the mechanism by which organisms evolved, achieves its successes by incorporating and using existing information.

Dembski says here: “we are challenging the claim that evolution can create information from scratch where previously it did not exist”. If evolution has occurred that challenge still holds good in as much as the low probabilities of life can only become high probabilities when conditioned on the contingencies of our particular cosmos. Those particulars in turn seem to have been chosen from a vast space of possibility and this choice constitutes the information input that Dembski requires. But surprisingly it is here that we find Dembski agnostic about just how this unwarranted information might be input, for he is not challenging common descent and evolutionary gradualism, or even natural selection as the principal mechanism by which organisms have evolved. He is simply telling us (and I agree with him) that if evolutionary mechanisms have resulted in life they can only have done so with a considerable burden of “active information”. So the evolutionists are still batting and have yet to be bowled out.

Although Dembski makes the theoretical concession that evolution can work given the right informational preconditions, we might expect him not to support evolution in practice as the means by which life has come to be. But in this connection Dembski says this:

The search algorithms in evolutionary computing give rampant evidence of teleology—from their construction to their execution to the very problems they solve. So too, when we turn to evolutionary biology, we find clear evidence of teleology: despite Dawkins’s denials, biological evolution is locating targets. Indeed, function and viability determine evolution’s targets (recall section 3) and evolution seems to be doing a terrific job finding them. Moreover, given that Darwinian evolution is able to locate such targets, LCI underwrites the conclusion that Darwinian evolution is teleologically programmed with active information.

Let me get this straight: “when we turn to evolution we find clear evidence of teleology …”? Is Dembski saying that evolution, albeit with the right informational conditions, has actually happened? Is Dembski, bit by bit, being taken down the evolutionary path? In the post on Uncommon Descent where Dembski introduced his paper I left a comment that seconded a comment left by another commentator named “Mapou”. Here is my comment:

11 Timothy V Reeves
05/02/2009
7:06 am
Thanks very much; this looks very interesting; I’ll give it a good look.
Mapou says at 9:
It seems to me that what this paper is saying is that evolution was designed, and I agree.
…that might be the opinion I’m beginning to form as well.

Immediately after my comment Dembski followed it with this comment:

12 William Dembski
05/02/2009
7:33 am
I urge people to read the paper before commenting......

Well, having since read the paper, I think I would not amend my first impression!

Whether or not Dembski is a closet or crypto-evolutionist, there is one issue, and a very important one to my mind, which Dembski’s paper does not address but which is touched upon in the following quotes taken from Dembski:

It follows that Dawkins’s characterization of evolution as a mechanism for building up complexity from simplicity fails. For Dawkins, proper scientific explanation is “hierarchically reductionistic,” by which he means that “a complex entity at any particular level in the hierarchy of organization” must be explained “in terms of entities only one level down the hierarchy.” Thus, according to Dawkins, “the one thing that makes evolution such a neat theory is that it explains how organized complexity can arise out of primeval simplicity.”
…..Conservation of information shows that Dawkins’s primeval simplicity is not nearly as simple as he makes out: the success of evolutionary search depends on the front-loading or environmental contribution of active information. Simply put, if a realistic model of evolutionary processes employs less than the full complement of fitness functions, that’s because active information was employed to constrain their permissible range.

As I have related in previous posts in this series, it is not clear to me that simplicity of explanation is ruled out by Dembski’s valid point about presence of active information. For it is far from clear whether or not the relatively simple cosmic law and disorder package of physics is one of those rare objects which encodes complexity via simplicity of expression. The complex “front loaded” fitness function which Dembski talks about may be implicit in a standard physics package which succeeds in encoding a reducibly complex morphospace. In this morphospace we may find smooth transitions of self sustaining structures, thus allowing evolutionary diffusion to do its work in creating them.

I don't suppose the evolution/ID debate will ever be Cricket

Friday, June 19, 2009

Characters of the Wild Web 14: There are some good guys out there after all.

Thursday, June 18, 2009

Darwin Bicentenary Part 23: Dembski’s Specified Complexity:

William Dembski quantitatively defines specified complexity as follows. If T is an observed pattern then the specified complexity of this pattern, C, is given by

C = - log[R S(T) P(T)] ,

Where:

R represents the replication resources, that is, the number of opportunities where the pattern T might arise. For example, the opportunity count might be proportional to the size of the universe in time and space because clearly the larger the universe (or the multiverse) then the greater the number of opportunities for T to arise. From the above equation it follows that C decreases with increasing R.

S(T) is the number of patterns with a Kolmogorov complexity no larger than that of pattern T. Kolmogorov complexity (= K) has the characteristic that the number of patterns consistent with a particular value of K increases as K increases. For example, the class of disordered patterns typically generated by, say, the tosses of a coin have a maximum K and correspondingly there are a very large number of such disordered patterns with the same value of K. Thus, it is clear that S(T) increases sharply as the frequency profiles of pattern T move toward a maximum disorder frequency profile. Looking at the definition of C it is clear that increasing S decreases the value of C. This means, all other factors being held constant, that disordered patterns of high Kolmogorov complexity will have a lower specified complexity than simpler patterns of lower Kolmogorov complexity. Admittedly this is a little bit confusing: Dembski’s specified complexity decreases with increasing Kolmogorov complexity.
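Kolmogorov complexity itself is uncomputable, but compressed length gives a crude, computable stand-in that conveys the idea (this is my own illustration, not part of Dembski’s treatment): a periodic coin-toss pattern compresses drastically, while a disordered one hardly compresses at all.

```python
import random
import zlib

def k_proxy(s: bytes) -> int:
    """Compressed length as a rough proxy for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

random.seed(0)
ordered = b"HT" * 500                                          # periodic pattern
disordered = bytes(random.choice(b"HT") for _ in range(1000))  # coin-toss-like

print(k_proxy(ordered), k_proxy(disordered))
# the periodic pattern compresses far more than the disordered one
```

The large gap between the two compressed lengths is the intuition behind S(T): the disordered pattern sits among a vast crowd of patterns of comparable K, whereas the periodic one belongs to a tiny class.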

P(T) is the probability of T should it have occurred by ‘chance’. That is, if we advance the “null hypothesis” that T is the outcome of a stochastic process, then P(T) is the probability of T occurring as a result of this process. Clearly C decreases with increasing P.

Rationale: Dembski’s definition of C is intended to act as a detector of intelligently contrived artifacts. Patterns with high values of C are supposed to alert observers to the presence of intelligent contrivance whereas low values of C are supposed to indicate that the pattern observed could (but not necessarily) be down to “natural causes”. Thus the rationale of specified complexity is predicated on the assumption that “natural causes” and “intelligent contrivance” are distinct categories. In this connection notice that C decreases with increasing R, S and P. R, S and P can be thought of as measures of the “natural” resources available to create T; the greater these are, then the more plausible the hypothesis that pattern T is an outcome of “natural” resources, and consequently the lower the value of C. P is the probability of a “natural” physical system generating T, and R is the number of natural systems available to generate T. The higher the values of P and R, the greater are the natural resources available to create T. In the quantity S Dembski is, I guess, trying to capture the intuition that ordered patterns are much rarer than disordered patterns, and therefore disordered patterns are somehow easier for natural systems to “find” and generate than are ordered patterns, which as a class are few and far between.
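Since R, S(T) and P(T) can be astronomically large or small, it is easiest to evaluate C in log space. Here is a minimal Python sketch of the definition (my own illustration; the numbers plugged in are hypothetical, and I assume a base-2 logarithm so that C comes out in bits):

```python
import math

def specified_complexity(log2_R, log2_S, log2_P):
    """C = -log2(R * S(T) * P(T)), computed from the base-2 logs of the
    three factors to avoid numerical overflow and underflow."""
    return -(log2_R + log2_S + log2_P)

# Hypothetical artifact: ~10^20 replication opportunities,
# ~2^50 patterns at or below its Kolmogorov complexity, and a
# chance probability of ~2^-500 under the null hypothesis.
C = specified_complexity(math.log2(1e20), 50, -500)
print(C)  # a large positive C, flagging possible contrivance
```

A large positive C corresponds to a pattern whose “natural” resources R, S and P fall far short of accounting for it; increase any of the three and C drops accordingly.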

What follows are some examples where I attempt to qualitatively apply Dembski’s definition of specified complexity.

Example 1: Coin Tossing: Let us apply Dembski’s definition of C to a pattern T that we hypothesize to be generated by 1000 tosses of a coin. Firstly, the pattern produced would have a very low probability because it is a unique pattern taken from 2^1000 possibilities; as all these possibilities have an equal probability the actual pattern probability P(T) would be very small at 2^-1000. The value R would be the number of times in the history of the world that 1000 tosses have been carried out. Clearly, since it is very likely that R is going to be much smaller than 2^1000, it follows that R x P is still going to be a very small value. Consequently, the product R x P by itself generates a pressure toward a very high specified complexity. However, we are still left with the value S(T) to take cognizance of. The number of patterns up to and including those with the high Kolmogorov complexity associated with typical patterns generated by randomness is going to be very high; in fact in the order of 2^1000. Thus, if the pattern we are looking at is typical of the highly disordered patterns generated by coin tossing then S(T) is going to have a value in the order of 2^1000. The result is that the high S(T) and low P(T) cancel out to produce a mediocre specified complexity for patterns typically generated by coin tossing. Given that a high value of C is supposed to indicate the presence of an intelligent agent behind a pattern, the mediocre values of C generated by a pattern that looks to be typical of the unintelligent process of coin tossing are to be expected.
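The cancellation between S(T) and P(T) can be made explicit with a quick back-of-envelope calculation (the value of R below is a hypothetical guess on my part; only its rough order of magnitude matters):

```python
import math

n = 1000                 # coin tosses
log2_P = -n              # P(T) = 2^-1000 for any particular sequence
log2_S = n               # ~2^1000 patterns of comparable disorder
log2_R = math.log2(1e9)  # suppose a billion runs of 1000 tosses (a guess)

C = -(log2_R + log2_S + log2_P)  # S and P cancel, leaving -log2(R)
print(C)  # ≈ -30: a mediocre, indeed negative, specified complexity
```

Whatever plausible R we choose, C never climbs above zero here: the huge class of equally disordered patterns always soaks up the improbability of the particular sequence observed.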

Example 2: Jumbled Junkyard: Now consider the example of a junkyard. Assuming the junkyard workers are breaking and mixing up the stuff delivered to their care without much intelligence or sorting, then the configuration of the junkyard contents will very likely be highly disordered; that is, the junkyard contents will be Kolmogorov complex. Hence, given the probabilities and replication resources available to the junkyard contents, we expect, using logic parallel to that of coin tossing, a jumbled junkyard to return a low specified complexity.

But now imagine that we discover the miscellany of items in a junkyard to have somehow become assembled into a functioning vehicle of some kind, like, say, an aircraft. Such a configuration has a very low probability under junkyard conditions. It also has a relatively low Kolmogorov complexity. Moreover, all the junkyards in the world are unlikely to provide sufficient replication opportunities to compensate for the very low P and S. Hence, given the “null hypothesis” that the aircraft has come together by chance we would calculate a very high value of C. This high value of C would alert us to suspect that some intelligent agent has been at work rather than the aircraft being a chance product of the normal workings of a junkyard and we would very probably be right.

Example 3: Sorted Junkyard: Let us now imagine that we found the items in a junkyard to be sorted into categories and neatly stacked into very regularly spaced piles. As with the discovery of an aircraft made from junkyard detritus, such a configuration would return a high specified complexity, rightly leading us to the conclusion that the junkyard employees are using their intelligence to organize the junkyard. However, there is one surprise for us in this expected result. Very regular configurations have a much lower Kolmogorov complexity than something as complex as an artifact like an aircraft, so a very regular and neatly stacked junkyard would rate as a lot less Kolmogorov complex than an aircraft constructed from junkyard parts. Thus, using Dembski’s definition, a highly organized junkyard would return a very high specified complexity, a lot higher than a complex artifact like an aircraft. This character of Dembski’s definition is, I feel, a little unnatural; are we to conclude that the intelligence required to neatly sort a junkyard is greater than that needed to conceive and assemble an aircraft? My guess is that Dembski would reply that C is meant only to flag the presence of intelligence rather than measure intelligence.

Example 4: Crystal Lattice: What about something like a crystal lattice? This is analogous in configuration to the neatly and regularly stacked junkyard. So does this mean we return a high specified complexity? The answer is no. This is because, using the null hypothesis that the crystal lattice is the product of a physical regime with the right mix of atomic forces, mineral concentrations, temperatures and pressures, P(Crystal) is very high. This high probability will considerably depress the value of C. To cut a long story short, standard physics and chemistry imply that highly ordered and unrepresentative crystal patterns can be built up using cumulative ratchet probabilities, which, of course, imply a probability of formation much higher than the straight spontaneous formation values. Hence, if we observe a crystal and hypothesize that it is a product of standard physics and chemistry, Dembski’s definition will assign crystals a low specified complexity, thus telling us that they could conceivably be a result of natural processes.
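A toy model (my own, not Dembski’s) makes the ratchet point vivid: if a configuration of n units must be assembled where each unit attaches with probability p per attempt, a ratchet that locks in every success needs on average about n/p attempts, whereas one-shot spontaneous formation needs about (1/p)^n attempts.

```python
import math

n, p = 100, 0.5                    # 100 units, 50% chance per attempt
ratchet_attempts = n / p           # successes are kept: ~200 attempts
one_shot_attempts = (1 / p) ** n   # all-at-once: 2^100 ≈ 1.3e30 attempts

print(ratchet_attempts)                # 200.0
print(math.log10(one_shot_attempts))   # ≈ 30.1, i.e. ~10^30
```

Two hundred attempts versus ten-to-the-thirty: cumulative formation makes P(Crystal) enormous relative to spontaneous formation, which is exactly why the crystal’s specified complexity collapses.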

Example 5: Biological Structures:
On the basis of the foregoing examples Dembski’s definition seems to do its job. It returns low specified complexities for coin tossing, a jumbled junkyard, and for crystals; in each case indicating that intelligence is not necessarily to be inferred as the source of these patterns. In contrast C has a high value for an aircraft assembled from junkyard detritus, thus leading us to suspect intelligence at work. The equation for C also returns a (surprisingly) high specified complexity for a well sorted junkyard, which once again is likely to be a product of junkyard employees using their intelligence to sort the material in their care.

OK, so having successfully “dry run” Dembski’s definition for some test cases we know about, what about the big 64 billion dollar question of living structures? Firstly, it seems fairly self-evident that organic structures are highly organized. This means that relative to the huge space of possibility, the number of possible patterns with Kolmogorov complexity no larger than organic structures is tiny, thus pushing up the specified complexity of organic configurations. The replication resources are difficult to evaluate but it is likely that R will not be in excess of any of the cosmic pure numbers, which are seldom, if ever, more than 200 digits in length. It might therefore seem that R could be very large, but a number of less than 200 digits is minuscule compared to the space of possibility whose dimensions are measured with numbers that run into billions of digits and beyond. Thus, it is clear that even cosmic-sized replication resources are utterly inadequate to provide the cover needed to find the members of a relatively tiny set of organic configurations lost in a truly immense space of possibility. Moreover, if probability is uniformly spread over the entire space of possibility it follows that P(T), where T = [organic configuration], will be tiny. Hence the product R.S(T).P(T) will be very small indeed, thus returning a very high specified complexity when plugged into Dembski’s equation. So, it looks as though organic structures return a high specified complexity, alerting us to the likelihood that intelligent “intervention” was required in the creation of life. So, do we conclude ID theory is vindicated? If only things were that simple.
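The arithmetic of these magnitudes is worth making explicit. The figures below are illustrative stand-ins for the quantities discussed above, not measurements:

```python
log10_R = 200                 # generous cosmic bound on replication resources
log10_space = 1_000_000_000   # possibility space of ~10^(10^9) configurations
log10_P = -log10_space        # uniform probability of any one configuration

log10_RP = log10_R + log10_P  # log of the product R * P(T)
print(log10_RP)  # -999999800: R barely dents the improbability
```

Adding 200 digits of replication resources to a billion-digit improbability changes essentially nothing; only a drastically non-uniform P(T) could rescue the situation.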

In the above considerations I have assumed cosmic replication resources are relatively low (barring multiverse speculations) when compared to the immensity of “possibility space”. I have also assumed that life has a relatively low S(T). I don’t think these two assumptions are controversial. However, P(T) is a different matter; it is the fudge factor that can hide so much. It is far from clear that probability is spread uniformly over the space of possibilities; in fact we know it isn’t. The high probability of the formation of crystals, structures that are above all the most unrepresentative of configurations, tells us that it isn’t. Crystals would have a high specified complexity were it not for the fact that the “law and disorder” of the physical regime facilitates the cumulative/ratchet formation of crystals molecule by molecule; crystals are, in fact, reducibly complex patterns. If the physical regime is such that it allows the “ratchet” formation of a configuration then the probability of its formation is, of course, far greater than the corresponding probability of spontaneous formation in one grand slam miracle. This takes us right back to the core ID issue of irreducible complexity: if organic structures, like crystals, are reducibly complex then we have the organic equivalent of a ratchet, which would increase P(Organic) to the point that C becomes low. The ID theorists will of course tell us that no biological equivalent of a “ratchet” exists and that organic structures are irreducibly complex. Well, that may be the case, but it is clear that Dembski’s definition doesn’t provide the answer to that question; his definition will only return a high specified complexity for organic structures if P(Organic) is low, and one way of ensuring that P(Organic) is low is to postulate that organic structures are irreducibly complex.

At first sight Dembski’s definition seems to successfully flag the presence of intelligent activity, although on the important question of biological structures the definition is hamstrung by the controversy over irreducible complexity. That apart, however, I do have three other comments I would like to make about Dembski’s definition.

Comment 1: The ID community as a whole makes a sharp distinction between intelligent causes and natural causes. At the extremes of the spectrum, say, when trying to distinguish between human and “natural” activity, this distinction may be clear cut. But measures of intelligence seem to run on a continuum scale. Moreover, it may be (although it is as yet far from proved) that human intelligence is a highly complex structure that can be explicated in terms of a convoluted application of a law and disorder ontology; if so then there will be no vitalistic criteria providing us with a clear-cut demarcation between “animal and mineral” (another way of saying “intelligence and non-intelligence”). What happens, then, to Dembski’s definition in the intermediate region between intelligence and non-intelligence? The examples I have worked through seem to succeed only because I was looking at cases where the distinction between intelligent causes and natural causes is clear.

Comment 2: Specified complexity, C, is a monotonically decreasing function of increasing Kolmogorov complexity. Kolmogorov complexity is a measure of a pattern’s disorder, so as disorder increases then S(T) increases, which in turn means that specified complexity decreases with increasing disorder. All other factors being held constant it follows, then, that highly organized patterns like simple periodic arrangements return higher specified complexities than some complex artifacts. This is a little counterintuitive; simple periodic structures seem to require less and not more intelligence in their construction than an object of intermediate complexity like, for example, an aircraft.

Comment 3: Natural causes also generate simple ordered patterns like, say, crystals or regular planetary orbits. Although S(T) is low for these “natural” patterns, according to Dembski they actually have a low specified complexity on account of their high probability, as conditioned on natural law. It does seem ironic, however, that the simple patterns described by natural law, which in any other context would have a high specified complexity flagging the likelihood of an intelligent source, have a low specified complexity. This is because Dembski’s definition of specified complexity is predicated on a sharp distinction between “natural causes” and “intelligent causes”, and the latter may be a cryptic way of referring to “supernatural causes”. In the world of ID theory “natural causes” and “intelligent causes” are competing tentative candidates for the explanation of a pattern. This competing candidacy is certainly valid within the confines of the cosmic context where in most cases clear-cut categories of natural causes and intelligent agents operate, and where it is of interest to science to determine which category is the cause of a pattern. But when it comes to questions of the primary explanatory ontology underlying the cosmos, a presumed paradigm that dichotomizes natural causes and intelligent causes creates a tension that, I submit, is all too easily resolved in favour of natural causes. For the ID community has taken on board a paradigm with implicit presumptions that commit them to deriding the efficacy of so-called “natural causes”, even to the extent that such causes are considered to be of trivial import, utterly incapable of any real creative potency and, if anything, only the source of decay; their chief role is to serve as a paragon of elemental non-sentience to be contrasted unfavourably against the role of the “unknown” intelligence that has created life.
The paradox is that it is these "natural causes" which give us highly ordered patterns that in any other circumstance return the extremely high specified complexities reckoned by Dembski to be a sign of intelligent activity!

The so called “natural causes” are to be identified with what ID theorists frequently call “chance and necessity” (what I refer to as “law and disorder”) or when referring specifically to evolution “natural selection and random mutation”. The ID category distinction of intelligence vs. natural causes, when applied to the outer frame of our cosmos, helps pave the way for a philosophical dualism, even a crypto “natural vs. supernatural” dualism of two competing potential creators, one inferior to the other. This dualism detracts from the enigma of natural causes and the question of their ultimate origins, reinforcing the “naturalness” of natural causes in as much as they are taken for granted to be an almost logically self sufficient category as would befit their assumed P(T) ~ 1 status. So, if natural causes are so …well, natural, almost to the extent they begin to assume an existential self-sufficiency and axiomatic status, then why do we need to venture beyond them and introduce such a complex and confounding explanatory object such as a-priori intelligence? Atheists would agree. For if, in the atheist view, all the patterns of the cosmos can be explicated in terms of law and disorder then according to Richard Dawkins we can all become “intellectually fulfilled atheists” and no longer need look for a primary ontology in the middle ground between law and disorder. For the atheist intuition is that natural causes are tantamount to a logically self sufficient and/or axiomatic ontology. The irony is that the ID community seems to unintentionally connive with those who believe that natural causes are a basic elemental category with no problematical existence.

Stop Press 19/6/09

Some indication of the upstart bogey that "natural causes" have become in the mind of ID theorists is suggested by these posts on Uncommon Descent:

http://www.uncommondescent.com/intelligent-design/the-nature-of-nature-edited-by-bruce-gordon-and-william-dembski/
http://www.uncommondescent.com/education/evolution-was-the-key-in-joseph-campbells-loss-of-faith/

In particular this quote from William Dembski is telling:

....when I studied ancient near eastern cosmologies at Princeton Theological Seminary, I found an interesting thing: they divided into cosmologies in which creation occurs through a spoken word by a supreme deity (the biblical cosmology was not unique in this regard) and cosmologies in which natural forces evolve and do all the creating, producing better and more powerful deities as time flows along (e.g., the Babylonian creation, in which Marduk is born several generations down and finally becomes the chief god).

Wednesday, June 10, 2009

Darwin Bicentenary Part 22: Of Psychos and Anti-Theorists

I was interested to read this post on Uncommon Descent by Cornelius Hunter about epigenetics. Of particular interest are Hunter’s comments on his web page (here) where he remarks on the apparent rapidity of evolutionary change. (In fact his whole web page is worth keeping tabs on)

Some people have some very high stakes in the evolution/ID debate; the problem/puzzle/mystery solving approach is not for them because the choice of evolution or ID is bound up with life style choices, viz: world views, vested interests, social networks and above all social status. Thus, the evolution/ID question often takes on more the character of a war of badly managed anger than a fascinating riddle.

For example, if evolution is false and/or needs radical modification (to account for rapid evolutionary change, say) then I have grave doubts that some high stake evolutionists will be dispassionate enough to handle the evidence with integrity. Some would perhaps sooner kill (of the “character assassination” kind I must add!) rather than admit to weaknesses in a cherished theory.

However, turning to Hunter’s work I was interested to find out whether he was proposing a radical modification to evolutionary mechanisms or whether he accepted common descent or even whether he was a YEC. Scanning his web page I was unable to find answers to these questions, for, alas, his web page seems to be largely anti-theory.

Characters of the wild web number 13
Evolution: The Cool, The Enraged and the Desperate

Pending Position Statement

As a result of direct inquiries I intend to produce, at some stage, a position statement regarding my views on Christianity. However, I am currently absorbed with one or two other matters that I am following up; hence this promissory note.

Friday, June 05, 2009

Intellectual Hegemony and The Thought Police

Characters of the Wild Web Number 12: This guy was caught subverting Law and Disorder

In this blog post scientific lawman Larry Moran introduces a “world view continuum” that runs all the way from Young Earth Creationists, through ID theorists and Theistic Evolutionists to Materialist Evolution. Larry objects that this spectrum is misleading because in his view there is a qualitative discontinuity somewhere between theistic evolution and materialist evolution. His motivation, I suspect, is that he wants to divide the world up into two opposing categories of people; fundamentalist materialists and the methodological law breakers he would consider guilty of “anti-science” subversion and superstition.

In order to try and get a handle on Larry’s views I traced a link back to an earlier essay of his entitled “Theistic Evolution: The Fallacy of the Middle Ground”. In this essay Larry starts with a quote from Eugenie Scott which tells us that “science restricts itself to explaining the natural world using natural causes”, a statement that is all but tautological; with this statement in mind we might be tempted to explain matter and energy using “natural causes” such as…er…matter and energy. However, Larry goes on to quote Michael Ruse, who is worth quoting; but then Ruse is a philosopher with Quaker roots.

Michael Ruse: The methodological naturalist is the person who assumes that the world runs according to unbroken law; that humans can understand the world in terms of this law; and that science involves just such understanding without any reference to extra or supernatural forces like God. Whether there are such forces or beings is another matter entirely and simply not addressed by methodological naturalism. Hence, although indeed evolution as we understand it is a natural consequence of methodological naturalism, given the facts of the world as they can be discovered, in no sense is the methodological naturalist thereby committed to the denial of God's existence. It is simply that the methodological naturalist insists that, inasmuch as one is doing science, one avoid all theological or other religious references. In particular, one denies God a role in creation. [Michael Ruse (2002) "Methodological Naturalism Under Attack" p. 365]

Not bad; at least that’s intelligible and I think I can go along with most of it. Leaving aside Ruse’s references to God, the crucial part of Ruse’s statement is this: “The methodological naturalist is the person who assumes that the world runs according to unbroken law; that humans can understand the world in terms of this law…” I identify Ruse’s “unbroken law” with my concept of “Law and Disorder”; that is, an assumed ontology whose behaviour is described using a combination of only two mathematical objects: short algorithmic laws and statistics. These two objects, which respectively reside at the extreme ends of the order/disorder spectrum, describe many, if not all, of the patterns in our world. Thus I conclude that “methodological naturalism” is an epistemology based on the assumption of an ontology that is subject to a regime of Law and Disorder.

So, given the foregoing definitions I’m a “methodological naturalist” – almost, but not quite. I say “almost” because I keep a corner of reservation in my mind, as I do about most things. Firstly, strict “methodological naturalism” is simply not a practical proposition. Many historical objects, for example, cannot be encapsulated using law and disorder only; the objects of history are in the main intrinsically complex, and history is likely to remain an irreducibly narrative-intense subject even if in principle history is ultimately the outcome of some kind of fundamental “law and disorder” physics like String Theory.

But I would go even further than this “in practice” critique of methodological naturalism; in fact I would want to express a fundamental skepticism and reservation about it. Although I favour and even like the idea that the cosmos is in principle (if not in practice) descriptively reducible to some fundamentalist vision of law and disorder, I certainly have my doubts. My position, then, is that of a skeptical methodological naturalist. For all I know there may be objects out there that are not descriptively reducible to the two classes of mathematical objects that come under the rubric of “law and disorder”. In fact it just seems all too humanly convenient to posit a world that can be exclusively rendered in law and disorder terms, which, surprise, surprise, make that world conceptually and epistemologically amenable to human probing. Isn’t somebody just trying to defend their intellectual comfort zone? I find the intellectual hubris behind this kind of unreserved assumption repugnant. Moreover, given that “Law and Disorder” objects are those most amenable to scientific epistemology, is it any surprise that the most vociferous proponent of the view that the categories of “Law and Disorder” are exhaustive turns out to be – yes, you’ve guessed it – a career scientist.

It is no surprise, then, that Larry’s essay proceeds by assuming the foregoing concept of naturalism. In fact he seems simply to take it for granted that methodological naturalism is exclusive, and nowhere in his essay does he attempt to justify this exclusiveness. He just asserts it as a rule of science; end of story, so shut up! Unlike Ruse, who may well be agnostic about objects transcending the categories of Law and Disorder, Larry Moran takes a dim view of anyone who might delve into realms beyond his stated terms of reference. Consider the following passages, for instance, which are taken from his essay:

Scientists, on the other hand, argue that an interventionist God who guides evolution violates the rules of science.

If science really does have to be strictly naturalistic, then even the softest version of intelligent design—that promoted by Michael Denton—is ruled out because God creates the laws of physics and chemistry. This point is worth emphasizing. If one's explanation of the natural world posits a God who created the laws of physics and chemistry then one is not behaving like a scientist. Of course, there's even more of a conflict if one's God is supposed to have set up the universe in order to produce humans.

On the surface it seems that all forms of religion conflict with science in one way or another. It seems as though there's no room at all for religious explanations of the natural world as long as we agree that scientists have to stick to naturalism. Do scientists really insist on this restriction? Yes, they do.

As Popper grappled with the nature of science and attempted to characterise it, he insisted that he was not presuming to draw a line round the whole of rational discourse, but instead was trying to draw a line within it*. Would Popper have used emotive language like “violates the rules of science” or “supernatural explanations” to describe discourse beyond the realm amenable to “Law and Disorder” science? Some atheists, on the other hand, seem determined to annex the whole of rational discourse into a catholic empire of “Law and Disorder”. What makes me suspicious of them is that they are not content to accept that the efficacy of scientific epistemology fades at the edges as ontology moves into the complex middle ground between law and disorder. Instead they proactively campaign against middle-ground ontologies and define those active in such areas as anti-science and anti-rational. They seek universal conformity to their proprietary concept of “behaving like a scientist”, in their view the only valid form of epistemic behaviour. They don’t recognise limits to science simply because everyone, without exception, is supposed to confine their discourse within the limits of a science beyond whose scope nothing else is presumed validly to exist. This is simply unfashionable cultural imperialism.

Presumably God falls outside the law and disorder category and into the “supernatural” middle category because He/She/It is a-priori complex and narrative-intense. Therefore someone like myself, who at the very least is conjecturing about deity and who is giving plenty of space to ID theorists, would be portrayed as being in conflict with science, perhaps even hostile to it, simply because the middle ground between law and disorder is excluded a-priori as an illegitimate realm of study for any human being; or, to put it in Larry’s terms, “Do scientists really insist on this restriction? Yes, they do.” Well, we certainly know one scientist who insists on it.

It would be wrong and unfair to suggest that all atheists are enthralled by a kind of intellectual hegemony. People like Paul Davies realise that the deeper questions of primary ontology and aseity are still up for grabs and therefore open to speculation. In his book “The Goldilocks Enigma” Davies submits his own (non-theistic) speculations for consideration. But when delving into such questions it is by no means obvious why the ontology that sustains and creates our universe should be a law and disorder ontology, or even an ontology that is easily accessible to science (String Theory, although a law and disorder ontology, currently has an accessibility problem). In any case it is far from clear that the problem of absolute necessity will be solved with a law and disorder ontology; in fact it may have a solution completely inaccessible to finite minds.

I have no problems with atheism per se; at least in the sense of it being a belief that the underlying primary ontology of aseity is not theistic. What I do have a problem with is intellectual hegemony. Intellectual hegemony may in part be an outcome of a lack of reflexiveness; that is, an inability to turn critical analysis on itself and become aware of its own presumed objects (whether those objects are, say, “science” or “intelligence”). If one can’t see those objects, but instead sees the world through them, then one is unable to submit them to scrutiny. If one perceives that others are not captivated by one’s own intellectual enthrallment, then the thought of reviewing that enthrallment may be too much to bear. A feeling of unease and even anger may set in, and this in turn may lead to attempts to police the thoughts of others.

* See Popper’s “The Logic of Scientific Discovery”.