(See here for part 1)
In this post on Panda's Thumb, mathematical evolutionist Joe Felsenstein discusses the latest attempts by the de facto Intelligent Design community to cast doubt on standard evolution and to bolster their belief that "Intelligence" is the mysterious and "unnatural" a priori ingredient needed to bring about organic configurations. In this second part I will introduce the subject of "Algorithmic Specified Complexity" (ASC). The definition of this ID concept can be found in an ID paper linked to in Joe's post. Joe's point is that whatever the merits or demerits of ASC, it is irrelevant to evolution. That may be the case, but more about that in part 3. In this part I want to get to grips with the concept of ASC.
The definition of "
Algorithmic Complexity" (i.e. without the "
specified") is fairly clear; it is the length of the shortest program which will define an indefinitely long sequential configuration. For example if we have an indefinitely repeating sequence like 101010101... it is clear that very a short program will define it. e..g.
For(Ever) { print 0, print 1}. We can see that there are obviously relatively few short programs because a short program string admits relatively few permutations of the available character tokens. On the other hand there is obviously an enormous number of indefinitely long output strings and so it follows that the supply of short programs that can be written and mapped to indefinitely long strings soon runs out. Therefore the only way to accommodate all the possible strings is to allow the program length to also increase indefinitely. It turns out that if programs are to define all the possible output strings available then the program strings must be allowed to grow to the length of the output string. Output strings which require a program of the same string length to define them are categorised as the class of random strings and of maximum algorithmic complexity.
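To make that counting argument concrete, here is a minimal Python sketch of my own (the function names and the choice of binary programs are illustrative assumptions only): a fixed-size generator that defines as much of 1010... as you like, plus the simple tally showing that programs much shorter than n bits can only ever account for a vanishing fraction of the 2^n possible n-bit output strings.

def repeating_sequence(n):
    # A fixed-size "program" that defines the first n characters of 101010...;
    # the program stays the same size however large n gets.
    return ("10" * (n // 2 + 1))[:n]

def short_program_bound(n, L):
    # At most 2^(L+1) - 2 non-empty binary programs have length at most L,
    # but there are 2^n output strings of length n, so for L well below n
    # most strings have no short program.
    return (2 ** (L + 1) - 2) / 2 ** n

print(repeating_sequence(20))        # 10101010101010101010
print(short_program_bound(100, 50))  # a vanishingly small fraction (~1.8e-15)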
In my paper on Disorder and Randomness I explain why these random strings are of maximum disorder. However, some output strings can be defined with a program string that is shorter than the output string. When this is the case the output string is said to be randomness deficient and of less than maximum algorithmic complexity.
So far so good. But it is when we add that vague term "specified" between 'Algorithmic' and 'Complexity' that the sparks start to fly. What does 'specified' mean here? Joe Felsenstein says this of "Algorithmic Specified Complexity" (ASC):
Algorithmic Specified Complexity (ASC) is a use of Kolmogorov/Chaitin/Solomonoff (KCS) Complexity, a measure of how short a computer program can compute a binary string (a binary number). By a simple counting argument, those authors were able to show that binary strings that could be computed by short computer programs were rare, and that binary strings that had no such simple description were common. They argued that that it was those, the binary strings that could not be described by short computer programs, that could be regarded as "random".

ASC reflects shortness of the computer program. In simple cases, the ASC of a binary string is its "randomness deficiency", its length, n, less the length of the shortest program that gives it as its output. That means that to get a genome (or binary string) that has a large amount of ASC, it needs long string that is computed by a short program. To get a moderate amount of ASC, one could have a long string computed by medium-length program, or a medium-length string computed by a short program. Randomness deficiency was invented by information theory researcher Leonid Levin and is discussed by him in a 1984 paper (here).

Definitions and explanations of ASC will be found in the papers by Ewert, Marks, and Dembski (2013), and Ewert, Dembski and Marks (2014). Nemati and Holloway have recently published a scientific paper at the Discovery Institute's house journal BIO-Complexity, presenting a proof of conservation of ASC. There has been discussion at The Skeptical Zone of the technical issues with ASC -- is it conserved or is it not? In particular, Tom English (here and here) has presented detailed mathematical argument at The Skeptical Zone showing simple cases which are counterexamples to the claims by Nemati and Holloway, and has identified errors in their proof. See also the comments by English in the discussion on those posts.
As far as my understanding goes, Felsenstein has given us a definition of "Algorithmic Complexity" and not "Algorithmic Specified Complexity", a notion which seems to be proprietary to de facto ID. So, being in doubt, I reluctantly turned to the scientific paper by Nemati and Holloway (N&H). They define ASC as:
ASC(x, C, p) := I(x) − K(x|C).
1.0
Where:
1. x is a bit string generated by some stochastic process,
2. I(x) is the Shannon surprisal of x, also known as the complexity of x, and
3. K(x|C) is the conditional algorithmic information of x, also known as the specification.
Note: I(x) = −log2(p(x)), where p is the probability of string x.
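To put a number on that (this is my own illustrative arithmetic, not N&H's, and it assumes a chance hypothesis in which each character is drawn uniformly and independently from a 27-symbol alphabet of capital letters plus the underscore), a string x of n such characters has p(x) = 27^−n, so:

I(x) = −log2(27^−n) = n log2(27) ≈ 4.75n bits

For the 28-character example strings used below that comes to roughly 133 bits of surprisal.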
This definition is somewhat more involved than basic Algorithmic Complexity. In 1.0 ASC has been defined as the sum of I and −K. Moreover, with ordinary algorithmic complexity K(x) represents the length of the shortest program that will generate and define string x, but N&H have used the quantity K(x|C), which is the length of the shortest program possible given access to a library of programming resources C. These resources could include data and other programs. This library effectively increases the length of the program string, and therefore output strings which are otherwise inaccessible to programs of a given length may then become accessible. But although a program using a library string can define output strings unreachable to similar-length programs without a library, the number of possible strings that can be mapped to is still limited by the length of the program string.
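To get an intuition for what the library C buys you, here is a minimal sketch of my own (not N&H's machinery: zlib compression length is only a crude, computable stand-in for the uncomputable K, and the preset dictionary merely plays the role of the context C). It describes a string once without any shared resources and once with a relevant library available:

import zlib

def description_bits(x: bytes, library: bytes = b"") -> int:
    # Length in bits of a zlib description of x, optionally primed with a preset
    # dictionary standing in for the shared "library" C. A rough, computable proxy
    # for conditional algorithmic information, nothing more.
    if library:
        co = zlib.compressobj(level=9, zdict=library)
    else:
        co = zlib.compressobj(level=9)
    return 8 * len(co.compress(x) + co.flush())

x = b"METHINKS_IT_IS_LIKE_A_WEASEL"
library = b"METHINKS_IT_IS_LIKE_A_WEASEL plus other Shakespearean resources"

print(description_bits(x))            # without the library
print(description_bits(x, library))   # with the library: typically fewer bits

The point is only qualitative: with access to a suitable context C, a short description can pin down strings that an equally short, library-free description could not reach.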
The motives behind N&H's definition of ASC, and in particular their motive for using conditional algorithmic information, they put like this (my emphases):
We will see that neither Shannon surprisal nor algorithmic information can measure meaningful information. Instead, we need a hybrid of the two, known as a randomness deficiency, that is measured in reference to an external context.....

ASC is capable of measuring meaning by positing a context C to specify an event x. The more concisely the context describes the event, the more meaningful it is. The event must also be unlikely, that is, having high complexity I(x). The complexity is calculated with regard to the chance hypothesis distribution p, which represents the hypothesis that x was generated by a random process described by p, implying any similarity to the meaningful context C is by luck. ASC has been illustrated by its application to measure meaningful information in images and cellular automata.

The use of context distinguishes ASC from Levin's generalized form of randomness deficiency in (8) and Milosavljević's algorithmic compression approach. The fundamental advantage is that the use of an independent external context allows ASC to measure whether an event refers to something beyond itself, i.e. is meaningful. Without the context, the other randomness deficiencies perhaps can tell us that an event is meaningful, but cannot identify what the meaning is.

Thus, ASC's use of an independent context enables novel contextual specifications to be derived from problem domain knowledge, and then applied to identify meaningful patterns, such as identifying non-trivial functional patterns in the game of life.
We can perhaps better understand N&H's motives for ASC if we consider the examples they give. Take a process which generates a highly computable sequence like:
AAAAAAAAAAAAAAAAAAAAAAAAAAAA......etc
2.0
Presumably this highly ordered sequence reflects a process where the probability of generating an "A" at each 'throw' is 1. Although human beings often like to create such neat sequences, so do the mindless processes of crystallisation; but for N&H crystallisation is hardly mysterious enough to classify as an intelligent agent. N&H would therefore like to eliminate this one from the inquiry. Hence in equation 1.0 it is clear that for a sequence like 2.0 −log2(p(x)) = 0, and therefore, although the program needed to print 2.0 is very short, we will have K(x|C) > 0; it follows by substitution into 1.0 that the ASC value associated with 2.0 is negative, i.e. low!
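Spelling out that substitution (my arithmetic, not N&H's, using equation 1.0 with p("A") = 1 at every position so that I(x) = 0):

ASC(x, C, p) = I(x) − K(x|C) = 0 − K(x|C) < 0

However short the program that prints a run of A's may be, its length is still greater than zero bits, so the ASC comes out below zero.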
Now let's try the following disordered sequence, which presumably is generated by a fully random process:
HNKRCYDO_BIIIEDWPBURW_OIMIBT......etc
3.0
Here −log2(p(x)) will be very high; on this count alone 3.0 contains a lot of information. But then the value of K(x|C) will also be high, because even the limited resources of C will be insufficient to successfully track a truly random source. We therefore expect the algorithm needed to generate this kind of randomness to be at least as long as 3.0 itself. Since −log2(p(x)) will return a bit length of similar size to K(x|C), ASC ~ 0.
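Spelling that out too (rough numbers of my own, assuming the same uniform 27-symbol chance hypothesis as before and a 28-character string, for which the K(x|C) of a truly random string sits close to its raw information content):

ASC(x, C, p) = I(x) − K(x|C) ≈ 28 log2(27) − 28 log2(27) ≈ 133 − 133 = 0 bits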
It follows then that both 2.0 and 3.0 return low values of ASC as expected.
Let us now turn to that character string immortalised by those literary giants Bill Shakespeare and Dick Dawkins. Viz:
METHINKS_IT_IS_LIKE_A_WEASEL
4.0
It is very unlikely, of course, that such a configuration as this is generated by a random process. For a start, using my configurational concept of disorder, texts by Shakespeare and Dawkins will not return a disordered profile. However, the probability of 4.0 appearing by chance alone is very small. Hence −log2(p(x)) will be large. But K(x|C) will be small because it taps into a large library of mental processing and data resources. Therefore the net result is that equation 1.0 is the sum of a large positive and a small negative, and so voila! it returns a high value of ASC, which is what we want; or shall I say it is what N&H want!
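Pulling the three examples together, here is a small Python sketch of the shape of the calculation. It is entirely my own toy, not N&H's method: the surprisal assumes a uniform 27-symbol chance hypothesis (except for the all-A case, where p = 1), and zlib compression against a preset dictionary again stands in, very crudely, for K(x|C).

import math
import zlib

def surprisal_bits(x: str, p_char: float) -> float:
    # I(x) = -log2 p(x) for an i.i.d. chance hypothesis with per-character probability p_char.
    return -len(x) * math.log2(p_char)

def k_proxy_bits(x: str, library: bytes) -> int:
    # Crude stand-in for K(x|C): bits used by zlib primed with the library as a preset dictionary.
    co = zlib.compressobj(level=9, zdict=library)
    return 8 * len(co.compress(x.encode()) + co.flush())

library = b"METHINKS_IT_IS_LIKE_A_WEASEL"   # a toy "context" C

examples = [
    ("2.0 ordered", "A" * 28, 1.0),
    ("3.0 random",  "HNKRCYDO_BIIIEDWPBURW_OIMIBT", 1 / 27),
    ("4.0 weasel",  "METHINKS_IT_IS_LIKE_A_WEASEL", 1 / 27),
]

for label, x, p in examples:
    asc = surprisal_bits(x, p) - k_proxy_bits(x, library)
    print(f"{label}: ASC proxy ~ {asc:.0f} bits")

On strings this short zlib's fixed overhead swamps the figures (the random case, for instance, comes out well below zero here, whereas a true K(x|C) would sit near I(x) and give ASC ~ 0), so only the relative ordering is meant to be suggestive: the Weasel string, which the context describes concisely, comes out on top.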
***
So, having assumed they have arrived at a suitable definition of ASC, N&H then go on to show that ASC is conserved. But according to Joe Felsenstein:
There has been discussion at The Skeptical Zone of the technical issues with ASC -- is it conserved or is it not? In particular, Tom English (here and here) has presented detailed mathematical argument at The Skeptical Zone showing simple cases which are counterexamples to the claims by Nemati and Holloway, and has identified errors in their proof. See also the comments by English in the discussion on those posts.
I suspect, however, that in spite of the errors N&H have muddled through to the right mathematical conclusion. But in any case Joe Felsenstein thinks the question is probably irrelevant because:
...the real question is not whether the randomness deficiency is conserved, or whether the shortness of the program is conserved, but whether that implies that an evolutionary process in a population of genomes is thereby somehow constrained. Do the theorems about ASC somehow show us that ordinary evolutionary processes cannot achieve high levels of adaptation?
As a rule the de facto IDists have two motives: firstly, they want to stop evolution in its tracks because they classify it as a banal "natural" process, and secondly, they want to place the origins of complex adaptive organisms in the mysteries of mind, which I suspect they believe to be an incomputable process. In a sense they seek a return to the mystique of vitalism.
But like Joe Felsenstein, although for different reasons, I find the question of the conservation of ASC irrelevant: at the moment it's looking to me as though ASC is both trivial and irrelevant for my purposes and simply leaves us with the same old questions. In the final analysis we find that information is not absolutely conserved, because the so-called "conservation of information" is not logically obliged but is a probabilistic result brought about by assuming a parallel processing paradigm; more about that in part 3. Yes, there's clearly a lot of good will and expertise in the de facto ID movement, and their attitudes are a lot better than those of the didactarian Genesis literalists. But although I'm a Christian there's no chance of me joining de facto ID's Christian following. Many Christians will be wowed by their work and become followers, but count me well and truly out of anything like that! The church needs independent minds, not followers.