
Thursday, June 18, 2009

Darwin Bicentenary Part 23: Dembski’s Specified Complexity:

William Dembski quantitatively defines specified complexity as follows. If T is an observed pattern then the specified complexity of this pattern, C, is given by

C = -log[ R × S(T) × P(T) ],

where:

R represents the replication resources, that is, the number of opportunities for the pattern T to arise. For example, the opportunity count might be proportional to the size of the universe in time and space: the larger the universe (or multiverse), the greater the number of opportunities for T to arise. From the above equation it follows that C decreases with increasing R.

S(T) is the number of patterns with a Kolmogorov complexity no larger than that of pattern T. Kolmogorov complexity (= K) has the characteristic that the number of patterns consistent with a particular value of K increases as K increases. For example, the disordered patterns typically generated by, say, the tosses of a coin have a maximum K, and correspondingly there is a very large number of such disordered patterns sharing that value of K. Thus S(T) increases sharply as the frequency profile of pattern T moves toward a maximum-disorder frequency profile. Looking at the definition of C, it is clear that increasing S decreases the value of C. This means, all other factors being held constant, that disordered, Kolmogorov-complex patterns will have a lower specified complexity than simpler patterns of lower Kolmogorov complexity. Admittedly this is a little confusing: Dembski’s specified complexity decreases with increasing Kolmogorov complexity.
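To give a feel for the link between disorder and Kolmogorov complexity, here is a rough Python illustration of my own. True Kolmogorov complexity is uncomputable, so I am using compressed length as a crude stand-in for K, and the two patterns are simply made up for the purpose:

```python
import random
import zlib

# Crude illustration only: true Kolmogorov complexity is uncomputable, so the
# zlib-compressed length is used here as a rough stand-in for K.
ordered = b"01" * 500                                            # a highly ordered pattern
disordered = bytes(random.getrandbits(8) for _ in range(1000))   # a 'coin-toss'-style pattern

print(len(zlib.compress(ordered)))      # small: the ordered pattern compresses well
print(len(zlib.compress(disordered)))   # close to 1000: little compression is possible
```

The ordered pattern compresses to a handful of bytes whereas the disordered pattern barely compresses at all, mirroring the point that disordered patterns have maximum K and that there are vastly more of them.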

P(T) is the probability of T should it have occurred by ‘chance’. That is, if we advance the “null hypothesis” that T is the outcome of a stochastic process, then P(T) is the probability of T occurring as a result of this process. Clearly C decreases with increasing P.
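Putting the three quantities together, here is a minimal sketch of how I read the definition, working in log space so that astronomically small probabilities don’t underflow. Base 2 is my assumption; the definition above does not fix the base of the logarithm:

```python
def specified_complexity(log2_R, log2_S, log2_P):
    """Minimal sketch of C = -log[R * S(T) * P(T)].

    Inputs are base-2 logarithms of R, S(T) and P(T) (base 2 is an assumption;
    the definition leaves the base unspecified). Working in log space avoids
    underflow when P(T) is astronomically small.
    """
    return -(log2_R + log2_S + log2_P)

print(specified_complexity(0.0, 0.0, -10.0))   # e.g. 10 bits of specified complexity
```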



Rationale: Dembski’s definition of C is intended to act as a detector of intelligently contrived artifacts. Patterns with high values of C are supposed to alert observers to the presence of intelligent contrivance, whereas low values of C are supposed to indicate that the pattern observed could (though not necessarily) be down to “natural causes”. Thus the rationale of specified complexity is predicated on the assumption that “natural causes” and “intelligent contrivance” are distinct categories. In this connection notice that C decreases with increasing R, S and P. R, S and P can be thought of as measures of the “natural” resources available to create T; the greater these are, the more plausible the hypothesis that pattern T is an outcome of “natural” resources, and consequently the lower the value of C. P is the probability of a “natural” physical system generating T, and R is the number of natural systems available to generate T. The higher the values of P and R, the greater the natural resources available to create T. In the quantity S Dembski is, I guess, trying to capture the intuition that ordered patterns are much rarer than disordered patterns, and therefore disordered patterns are somehow easier for natural systems to “find” and generate than are ordered patterns, which as a class are few and far between.

What follows are some examples where I attempt to qualitatively apply Dembski’s definition of specified complexity.

Example 1: Coin Tossing: Let us apply Dembski’s definition of C to a pattern T that we hypothesize to be generated by 1000 tosses of a coin. Firstly, the pattern produced would have a very low probability because it is a unique pattern taken from 2^1000 possibilities; as all these possibilities are equally probable, the actual pattern probability P(T) would be very small, at 2^-1000. The value R would be the number of times in the history of the world that 1000 tosses have been carried out. Clearly, since R is very likely to be much smaller than 2^1000, it follows that R × P is still going to be a very small value. Consequently, the product R × P by itself generates a pressure toward a very high specified complexity. However, we are still left with the value S(T) to take cognizance of. The number of patterns with Kolmogorov complexity up to and including the high value associated with typical patterns generated by randomness is going to be very large; in fact of the order of 2^1000. Thus, if the pattern we are looking at is typical of the highly disordered patterns generated by coin tossing, then S(T) is going to have a value of the order of 2^1000. The result is that the high S(T) and low P(T) cancel out to produce a mediocre specified complexity for patterns typically generated by coin tossing. Given that a high value of C is supposed to indicate the presence of an intelligent agent behind a pattern, the mediocre value of C returned by a pattern that looks typical of the unintelligent process of coin tossing is to be expected.
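To make the arithmetic concrete, here is a back-of-envelope sketch of my own with illustrative figures (the value of R in particular is simply a generous guess):

```python
import math

# Illustrative figures only: P(T) = 2^-1000 for a specific 1000-toss sequence,
# S(T) taken as roughly 2^1000 for a typical disordered sequence, and R taken
# (generously) as 10^20 runs of 1000 tosses in the history of the world.
log2_P = -1000.0
log2_S = 1000.0
log2_R = 20 * math.log2(10)        # about 66.4

C = -(log2_R + log2_S + log2_P)    # = -(66.4 + 1000 - 1000) = about -66.4
print(C)                           # a low ('mediocre') specified complexity
```

The huge S(T) cancels the tiny P(T), leaving only the modest R term, so C comes out low, as argued above.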

Example 2: Jumbled Junkyard: Now consider the example of a junkyard. Assuming the junkyard workers are breaking and mixing up the stuff delivered to their care without much intelligence or sorting, the configuration of the junkyard contents will very likely be highly disordered; that is, the junkyard contents will be Kolmogorov complex. Hence, given the probabilities and replication resources available in the junkyard case, we expect, using logic parallel to that of coin tossing, a jumbled junkyard to return a low specified complexity.

But now imagine that we discover the miscellany of items in a junkyard to have somehow become assembled into a functioning vehicle of some kind, like, say, an aircraft. Such a configuration has a very low probability under junkyard conditions. It also has a relatively low Kolmogorov complexity. Moreover, all the junkyards in the world are unlikely to provide sufficient replication opportunities to compensate for the very low P and S. Hence, given the “null hypothesis” that the aircraft has come together by chance, we would calculate a very high value of C. This high value of C would alert us to suspect that some intelligent agent has been at work rather than the aircraft being a chance product of the normal workings of a junkyard, and we would very probably be right.

Example 3: Sorted Junkyard: Let us now imagine that we found the items in a junkyard to be sorted into categories and neatly stacked into very regularly spaced piles. As with the discovery of an aircraft made from junkyard detritus, such a configuration would return a high specified complexity, rightly leading us to the conclusion that the junkyard employees are using their intelligence to organize the junkyard. However, there is one surprise for us in this expected result. Very regular configurations have a much lower Kolmogorov complexity than a complex artifact like an aircraft. Thus simple structures, like a very regular and neatly stacked junkyard, rate as much less Kolmogorov complex than an aircraft constructed from junkyard parts. So, using Dembski’s definition, a highly organized junkyard would return a very high specified complexity, a lot higher than that of a complex artifact like an aircraft. This character of Dembski’s definition is, I feel, a little unnatural; are we to conclude that the intelligence required to neatly sort a junkyard is greater than that needed to conceive and assemble an aircraft? My guess is that Dembski would reply that C is meant only to flag the presence of intelligence rather than measure intelligence.

Example 4: Crystal Lattice: What about something like a crystal lattice? This is analogous in configuration to the neatly and regularly stacked junkyard. So does this mean we return a high specified complexity? The answer is no. This is because, using the null hypothesis that the crystal lattice is the product of a physical regime with the right mix of atomic forces, mineral concentrations, temperatures and pressures, P(Crystal) is very high. This high probability will considerably depress the value of C. To cut a long story short, standard physics and chemistry imply that highly ordered and unrepresentative crystal patterns can be built up via a cumulative “ratchet” of probabilities, which, of course, implies a probability of formation much higher than that of straight spontaneous formation. Hence, if we observe a crystal and hypothesize that it is a product of standard physics and chemistry, Dembski’s definition will assign crystals a low specified complexity, thus telling us that they could conceivably be a result of natural processes.
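A toy model of my own (not Dembski’s, and with arbitrary numbers) shows how drastically a ratchet raises the probability of formation compared with one grand-slam assembly:

```python
# Toy model with arbitrary numbers: a structure needs 100 components in place,
# each landing correctly with probability 0.01 per trial.
p_step, n_steps = 0.01, 100

# Grand-slam spontaneous formation: every component must land correctly at once.
p_spontaneous = p_step ** n_steps             # 10^-200

# Ratchet formation: each correctly placed component is locked in, so the
# expected number of trials needed is roughly n_steps / p_step.
expected_trials_ratchet = n_steps / p_step    # about 10^4 trials

print(p_spontaneous, expected_trials_ratchet)
```

Locking in each step turns an effectively impossible event into one expected within a few thousand trials, which is why a ratcheting physical regime sends P(Crystal) soaring and C plummeting.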


Example 5: Biological Structures:
On the basis of the foregoing examples Dembski’s definition seems to do its job. It returns low specified complexities for coin tossing, a jumbled junkyard, and crystals; in each case indicating that intelligence is not necessarily to be inferred as the source of these patterns. In contrast, C has a high value for an aircraft assembled from junkyard detritus, thus leading us to suspect intelligence at work. The equation for C also returns a (rather surprisingly) high specified complexity for a well-sorted junkyard, which once again is likely to be a product of junkyard employees using their intelligence to sort the material in their care.

OK, so having successfully “dry run” Dembski’s definition for some test cases we know about, what about the big 64-billion-dollar question of living structures? Firstly, it seems fairly self-evident that organic structures are highly organized. This means that, relative to the huge space of possibility, the number of possible patterns with Kolmogorov complexity no larger than that of organic structures is tiny, thus pushing up the specified complexity of organic configurations. The replication resources are difficult to evaluate but it is likely that R will not be in excess of any of the cosmic pure numbers, which are seldom, if ever, greater than 200 digits in length. It might therefore seem that R could be very large, but a number of fewer than 200 digits is minuscule compared to a space of possibility whose dimensions are measured with numbers that run into billions of digits and beyond. Thus, it is clear that even cosmic-sized replication resources are utterly inadequate to provide the cover needed to find the members of a relatively tiny set of organic configurations lost in a truly immense space of possibility. Moreover, if probability is uniformly spread over the entire space of possibility it follows that P(T), where T = [organic configuration], will be tiny. Hence the product R × S(T) × P(T) will be very small indeed, thus returning a very high specified complexity when plugged into Dembski’s equation. So, it looks as though organic structures return a high specified complexity, alerting us to the likelihood that intelligent “intervention” was required in the creation of life. So, do we conclude ID theory is vindicated? If only things were that simple.
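To get a feel for the disparity of scale, here is a rough comparison of my own, with purely illustrative numbers (a hypothetical 10^9-base genome standing in for the possibility space, and a generous R = 10^200):

```python
import math

# Purely illustrative numbers: the possibility space is taken as the 4^(10^9)
# sequences of a hypothetical 10^9-base genome; the replication resources as a
# generous R = 10^200.
log10_space = (10 ** 9) * math.log10(4)   # roughly 6 x 10^8 digits
log10_R = 200.0

# log10 of (size of space / R): even a 200-digit R barely dents the space.
print(log10_space - log10_R)              # still roughly 6 x 10^8
```

Dividing the space by R removes a mere 200 digits from a number hundreds of millions of digits long, which is the sense in which cosmic replication resources fail to provide cover.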

In the above considerations I have assumed cosmic replication resources are relatively low (barring multiverse speculations) when compared to the immensity of “possibility space”. I have also assumed that life has a relatively low S(T). I don’t think these two assumptions are controversial. However, P(T) is a different matter; it is the fudge factor that can hide so much. It is far from clear that probability is spread uniformly over the space of possibilities; in fact we know it isn’t. The high probability of the formation of crystals, structures that are among the most unrepresentative of configurations, tells us that it isn’t. Crystals would have a high specified complexity were it not for the fact that the “law and disorder” of the physical regime facilitates the cumulative/ratchet formation of crystals molecule by molecule; crystals are, in fact, reducibly complex patterns. If the physical regime is such that it allows the “ratchet” formation of a configuration, then the probability of its formation is, of course, far greater than the corresponding probability of spontaneous formation in one grand slam miracle. This takes us right back to the core ID issue of irreducible complexity: if organic structures, like crystals, are reducibly complex, then we have the organic equivalent of a ratchet, which would increase P(Organic) to the point where C becomes low. The ID theorists will of course tell us that no biological equivalent of a “ratchet” exists and that organic structures are irreducibly complex. Well, that may be the case, but it is clear that Dembski’s definition doesn’t provide the answer to that question; his definition will only return a high specified complexity for organic structures if P(Organic) is low, and one way of ensuring that P(Organic) is low is to postulate that organic structures are irreducibly complex.

Comments

At first sight Dembski’s definition seems to successfully flag the presence of intelligent activity, although on the important question of biological structures the definition is hamstrung by the controversy over irreducible complexity. That apart, however, I do have three other comments I would like to make about Dembski’s definition.

Comment 1: The ID community as a whole makes a sharp distinction between intelligent causes and natural causes. At the extremes of the spectrum, say, when trying to distinguish between human and “natural” activity, this distinction may be clear cut. But measures of intelligence seem to run on a continuum. Moreover, it may be (although it is as yet far from proved) that human intelligence is a highly complex structure that can be explicated in terms of a convoluted application of a law and disorder ontology; if so then there will be no vitalistic criteria providing us with a clear-cut demarcation between “animal and mineral” (another way of saying “intelligence and non-intelligence”). What happens, then, to Dembski’s definition in the intermediate region between intelligence and non-intelligence? The examples I have worked through seem to succeed only because I was looking at cases where the distinction between intelligent causes and natural causes is clear.

Comment 2: Specified complexity, C, is a monotonically decreasing function of Kolmogorov complexity. Kolmogorov complexity is a measure of a pattern’s disorder, so as disorder increases S(T) increases, which in turn means that specified complexity decreases with increasing disorder. All other factors being held constant it follows, then, that highly organized patterns like simple periodic arrangements return higher specified complexities than some complex artifacts. This is a little counterintuitive; simple periodic structures seem to require less, not more, intelligence in their construction than an object of intermediate complexity like, for example, an aircraft.

Comment 3: Natural causes also generate simple ordered patterns like, say, crystals or regular planetary orbits. Although S(T) is low for these “natural” patterns, according to Dembski they actually have a low specified complexity on account of their high probability, as conditioned on natural law. It does seem ironic, however, that the simple patterns described by natural law, which in any other context would have a high specified complexity flagging the likelihood of an intelligent source, have a low specified complexity. This is because Dembski’s definition of specified complexity is predicated on a sharp distinction between “natural causes” and “intelligent causes”, and the latter may be a cryptic way of referring to “supernatural causes”. In the world of ID theory “natural causes” and “intelligent causes” are competing tentative candidates for the explanation of a pattern. This competing candidacy is certainly valid within the confines of the cosmic context, where in most cases clear-cut categories of natural causes and intelligent agents operate, and where it is of interest to science to determine which category is the cause of a pattern. But when it comes to questions of the primary explanatory ontology underlying the cosmos, a presumed paradigm that dichotomizes natural causes and intelligent causes creates a tension that, I submit, is all too easily resolved in favour of natural causes. For the ID community has taken on board a paradigm with implicit presumptions that commit them to deriding the efficacy of so-called “natural causes”, even to the extent that such causes are considered to be of trivial import, utterly incapable of any real creative potency and, if anything, only the source of decay; their chief role is to serve as a paragon of elemental non-sentience to be contrasted unfavourably against the role of the “unknown” intelligence that has created life. The paradox is that it is these “natural causes” which give us the highly ordered patterns that in any other circumstance would return the extremely high specified complexities reckoned by Dembski to be a sign of intelligent activity!

The so-called “natural causes” are to be identified with what ID theorists frequently call “chance and necessity” (what I refer to as “law and disorder”) or, when referring specifically to evolution, “natural selection and random mutation”. The ID category distinction of intelligence vs. natural causes, when applied to the outer frame of our cosmos, helps pave the way for a philosophical dualism, even a crypto “natural vs. supernatural” dualism of two competing potential creators, one inferior to the other. This dualism detracts from the enigma of natural causes and the question of their ultimate origins, reinforcing the “naturalness” of natural causes inasmuch as they are taken for granted to be an almost logically self-sufficient category, as would befit their assumed P(T) ~ 1 status. So, if natural causes are so… well, natural, almost to the extent that they begin to assume an existential self-sufficiency and axiomatic status, then why do we need to venture beyond them and introduce as complex and confounding an explanatory object as a priori intelligence? Atheists would agree. For if, in the atheist view, all the patterns of the cosmos can be explicated in terms of law and disorder, then according to Richard Dawkins we can all become “intellectually fulfilled atheists” and no longer need look for a primary ontology in the middle ground between law and disorder. For the atheist intuition is that natural causes are tantamount to a logically self-sufficient and/or axiomatic ontology. The irony is that the ID community seems unintentionally to connive with those who believe that natural causes are a basic elemental category with no problematical existence.

Stop Press 19/6/09

Some indication of the upstart bogey that “natural causes” have become in the minds of ID theorists is suggested by these posts on Uncommon Descent:

http://www.uncommondescent.com/intelligent-design/the-nature-of-nature-edited-by-bruce-gordon-and-william-dembski/
http://www.uncommondescent.com/education/evolution-was-the-key-in-joseph-campbells-loss-of-faith/

In particular this quote from William Dembski is telling:

....when I studied ancient near eastern cosmologies at Princeton Theological Seminary, I found an interesting thing: they divided into cosmologies in which creation occurs through a spoken word by a supreme deity (the biblical cosmology was not unique in this regard) and cosmologies in which natural forces evolve and do all the creating, producing better and more powerful deities as time flows along (e.g., the Babylonian creation, in which Marduk is born several generations down and finally becomes the chief god).

