"Conservation of Information" ideas may appeal to "God of the Gaps" thinkers..
Although Intelligent Design guru William Dembski’s work on the Conservation of Information is, I believe, entirely correct, the definition of “information” he uses means that his work fails to capture vital aspects of what we would informally associate with the term “information”. The definition of information Dembski uses can be found in this web article: http://www.arn.org/docs/dembski/wd_idtheory.htm
The criticisms I would make of the applicability of Dembski’s ideas are these:
ONE: Dembski uses the concept of information as “−log p”, where p is the probability of an event. From this definition it follows that improbability entails high information. But this measure of information, although fine for the event-centric world of communication, is not uniquely sensitive to the quantity of information implicit in a static configuration. The value “p” could be the probability of a single simple event, or it could be the product of the probabilities of a complex configuration of independent events, such that p = P1 × P2 × P3 × … × Pi × …, where Pi is the probability of the ith event. In short, “high information” in Dembski’s sense doesn’t necessarily entail a complex configuration.
One way of quantifying the complex information in a configuration is to define it as the length of the shortest compressed string that will describe the configuration in question. Wikipedia makes a similar criticism of Dembski’s use of the term “information”: see here: http://en.wikipedia.org/wiki/Specified_complexity#Criticisms
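As a rough illustration of the distinction, here is a sketch of my own in Python (using zlib’s compressed length as a crude stand-in for configurational complexity rather than true Kolmogorov complexity). Two 1000-bit configurations drawn from a uniform ensemble carry exactly the same “Dembski information”, yet one is trivially simple and the other is not:

import math
import random
import zlib

def surprisal_bits(p):
    # Dembski-style information of an outcome with probability p: -log2(p)
    return -math.log2(p)

# Two 1000-bit configurations, each a single outcome from a uniform ensemble
# of 2**1000 possibilities, so each has probability 2**-1000.
simple_config = "0" * 1000                                          # all zeros
complex_config = "".join(random.choice("01") for _ in range(1000))  # random bits

print(surprisal_bits(2.0 ** -1000))   # 1000.0 bits for EITHER configuration

# A crude proxy for configurational complexity: compressed length.
print(len(zlib.compress(simple_config.encode())))    # small: highly compressible
print(len(zlib.compress(complex_config.encode())))   # considerably larger: little exploitable structure

The “−log p” measure returns 1000 bits in both cases; only the compression-based measure registers that the second configuration is structurally complex and the first is not.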
TWO: Probability is a function of our level of knowledge, and therefore probability changes with knowledge; e.g. if we have perfect knowledge of a complex configuration this entails a configuration probability of 1; that is, a known configuration has a probability of 1 and therefore no information. This conclusion makes sense if we are thinking about communication, that is, about receiving and registering signals; in this context the reason why a known configuration no longer contains information is that once it is known it is no longer informative. But if this signal-oriented concept of information is pressed into the service of measuring configurational information, it tempts some silly conclusions. Let’s assume for the sake of argument that, given the size of the visible cosmos along with its constraining physical regime, the probability of life being generated is nearly 1. This quasi-determinism implies that life contains next to no information since, of course, a probability of 1 entails zero information. I’ve actually seen an argument of this type used on the ID website Uncommon Descent (I wish I had stored the link!). It went something like this:
“Necessity (= the laws of physics) could not have deterministically generated life because that entails a result with a probability of 1 and therefore zero information. Life contains lots of information, therefore it could not have been generated by necessity!”
This bogus argument not only uses an inappropriate measure of information but is also based on dichotomizing “chance and necessity”, another of the false dichotomies habitually used by the de-facto ID movement. (Although this false dichotomy is another story.)
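To put a number on the point, here is a quick sketch of my own (not Dembski’s calculation). One and the same configuration is assigned wildly different amounts of “information” depending only on our state of knowledge about how it came about:

import math

def surprisal_bits(p):
    # Dembski-style information of an outcome with probability p: -log2(p)
    return -math.log2(p)

# The very same 1000-bit configuration, assessed under two states of knowledge:
p_ignorance = 2.0 ** -1000    # uniform ignorance: any of 2**1000 outcomes equally likely
p_quasi_determined = 0.999    # physics (or prior knowledge) makes it nearly inevitable

print(surprisal_bits(p_ignorance))         # 1000.0 bits
print(surprisal_bits(p_quasi_determined))  # ~0.0014 bits: next to no "information"

The configuration itself hasn’t changed; only our knowledge of it has. A measure that observer-dependent is a poor yardstick of how much structure a configuration actually contains, and it is what licenses the bogus “necessity implies zero information” argument above.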
THREE: If for the sake of argument we assume the existence of a sufficiently large superverse where every conceivable possibility is realized, then the communication-based concept of information that Dembski is using once again returns a counter-intuitive conclusion; viz., that life has no information, because in the superverse Prob(life somewhere) = 1.
FOUR: In his work Dembski assumes the principle of equal a priori probabilities amongst possibilities where there is no known reason why any of those possibilities should be preferred. Given that the number of cases favouring life in platonic space is an extremely tiny proportion of all that is possible, it follows that the probability of life, p, is very small and therefore life is information-packed. So far, so good; I wouldn’t quibble with that. However, I’m currently working on a speculative theory that posits huge (and expanding) parallel computational resources as a means of finding living structures. In a scenario where there are multiple trials running in parallel, say m of them, then depending on algorithmic efficiency it is conceivable that the probability of life could be as great as m × p (see the sketch below). If m, as I eventually intend to propose, expands rapidly with time, then it follows that the probability of life also changes rapidly with time; ergo, under these conditions “information” as the de-facto ID community habitually understand the term is not conserved.* Moreover, if information is defined in terms of configurational complexity, we find that it can be both created and destroyed and therefore it too is not conserved.
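Here is a toy sketch of why expanding parallel resources undermine the conservation claim (my own illustration, with a made-up per-trial probability; this is not Dembski’s calculation). With m independent parallel trials the chance of at least one hit is 1 − (1 − p)^m, which is approximately m × p when m × p is small, so the “information” attached to finding life shrinks as m grows:

import math

def p_life_with_parallel_trials(p, m):
    # Probability of at least one hit in m independent trials,
    # each with per-trial success probability p.
    return 1.0 - (1.0 - p) ** m

def dembski_information_bits(prob):
    # -log2(probability): Dembski's measure of the information in an outcome.
    return -math.log2(prob)

p = 1e-9                       # made-up, tiny per-trial probability of finding a living structure
for m in (1, 10**3, 10**6):    # computational resources expanding with time
    prob = p_life_with_parallel_trials(p, m)
    print(m, prob, dembski_information_bits(prob))   # ~30, ~20, ~10 bits respectively

Each thousand-fold expansion of m knocks roughly log2(1000) ≈ 10 bits off the “information” needed to find life; if the resources m expand with time, that quantity is simply not conserved.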
Let me repeat that none of this is to say that Dembski’s work is wrong: in particular, his “signal” concept of information (although inappropriate for measuring configurational complexity) is conserved when computing resources are conserved. However, Dembski’s ideas, when one starts to move into alternative and radical models of computation, fail to capture some important facets of the situation. Let me just say in finishing that I can’t help but feel that the reason the concept of “Conservation of Information” has struck a chord with the de-facto ID community is that it sits very well with the “God of the Gaps” concepts that are implicit in the North American ID community. (See my series on ID guru V J Torley.)
Footnote
* Assigning probabilities and “Dembski information” to the computational models themselves is difficult because there is no clear way to define classes of favourable cases relative to the total possible cases in the open-ended vistas of platonic space.