(This post is still undergoing correction and enhancement)
Propeller technology was always going to have a problem breaking the sound barrier, whereas jet technology didn't.
But having said that, let me say that my own position occupies a space somewhere between, on the one hand, the IDists, who won't abide evolution because it works through what they believe to be creatively inferior, so-called "natural forces", and, on the other, the atheists, who are determined to show that evolution is a cinch and more-or-less in the bag. The fact is that some evolutionists (although this may not apply to Felsenstein) do not fully appreciate the information barrier that evolution actually presents (see Larry Moran, for example). In fact the implicit parallel computational paradigm found in the standard concept of evolution is not going to be up to the task unless it starts with a huge burden of up-front information. This is where I believe the work of IDists like William Dembski is relevant and valid, as I have said before, although Dembski and his ID interpreters have inferred that his work also implies some kind of "Conservation of Information", whatever that means.
The consequence of IDists over-interpreting evolution's information barrier as an absolute barrier is that they then conclude they have in their hands proof that "natural forces" cannot generate life and that some extra "magic" is needed to inject information into natural history in order to arrive at bio-configurations. They identify that extra magic not as the "supernatural" (that would look too "unscientific"!) but instead as "Intelligent agency".
To a Christian like myself, however, this IDist philosophy raises questions. For although there is a clear creation-versus-God dualism in Christian theology, I find the implicit dualism within creation implied by de facto IDism problematic: if an omniscient and omnipotent God has created the so-called "natural forces", then it would seem to be quite within his capabilities to provision creation in such a way that natural history could conceivably include the "natural" emergence of living configurations. The "magic" may already be there for all we know!
Moreover, it is clear that human intelligence, which is one of the processes of the created order, can "create information", and I don't think the IDists would deny that. And yet as far as we know human intelligence appears not to transcend God's created providence. IDists, however, are likely to attempt to get round this observation by maintaining that human intelligence has a mysterious and extraordinary ingredient which allows it to create information; for example, I have seen IDists use Roger Penrose's idea that human intelligence involves incomputable processes as the mysterious super-duper magic needed to create information. Penrose's ideas, if correct, imply that human intelligence (and presumably the intelligence that IDists claim on occasion injects information into natural history) cannot be described algorithmically. If this line of argument can be maintained then it would justify the IDists' dualism. But this IDist paradigm can be challenged. For a start, I believe Penrose was wrong in his argument about the incomputability of human thinking; see here and here. Yes, human thinking may have some extraordinary ingredient of which we are unaware, but it may have been part of the covenant of creation all along; I don't, however, believe it to be an incomputable process.
This post (and the next post) is my take on the "information barrier" debate between evolutionists and IDists. I will not be going into the minutiae of how the IDists or their antagonists arrive at their conclusions but I will be looking at the conclusions themselves and comparing them with my own conclusions based on three projects of mine which throw light on the subject. Viz:
1. Disorder & Randomness. This project defines randomness algorithmically.
2. The Melancholia I project: This project discusses the possibility of information creation and/or destruction.
3. The Thinknet project: This project defines the term "specified information" teleologically and notes the parallels with quantum mechanics.
Summarising my position: I can go some of the way with Dembski and the IDists in that there is an issue with standard evolution in so far as it demands some mysterious up-front information in order to work. But there is no such thing as "the conservation of information"; information can be destroyed and created, and in any case the arguments used by IDists are unsafe because there is more than one understanding of what "information" actually is. It is likely that the IDists' "conservation of information" arises because we are most familiar with linear and parallel computing resources, a computing paradigm that has difficulty breaking through the information barrier. This contrasts, as we shall see, with the exponential resources of expanding parallelism, and the presence of these resources exorcises the dualist's ghost in the machine which haunts IDist philosophy. On the other hand, some atheists are unaware that there is an information barrier at all (probably not true of Joe Felsenstein - see here and here) and are unlikely to see Dembski's work as laying down a serious challenge.
As usual I don't dogmatically push my own ideas from a polemical partisan soap box seeking conversions to my case. For me this is a personal quest, an adventure and journey through the spaces of the mind which quite likely may not lead anywhere. The journey, as I often say, is better than the destination.
***
In this post I want to introduce the information barrier via William Dembski's work. In the video of a lecture I embedded in my blog post here, Dembski introduces his concept of the "conservation of information" via the following simple relationship:
r < p/q          ...1.0

...where p is the unconditional probability of life and q is the conditional probability of life given a physical regime whose probability is r; that is, 1.0 puts an upper bound on the probability r of hitting upon a physical regime that confers on life a conditional probability as high as q.
I give an elementary proof of this theorem in the said blog post. Relationship 1.0 will tell us what we are looking for if we rearrange it a bit. Viz:
q < p/r          ...2.0
Now, we expect p, the unconditional probability of life, to be very small; that is, if we were using a computational method which involved choosing molecular configurations at random, then it is fairly obvious that the complex organised configurations capable of self-replication and self-perpetuation will, by virtue of their rarity in configuration space, have an absolutely tiny probability. If we want to fix this problem of improbability and give life a realistic chance of emerging, then conceivably we could contrive some physical regime under which life comes about with a probability q that beats random selection, where q >> p and where q is the conditional probability of life, that is, the probability of life given the context of the physical regime. But if q is to be realistic then from 2.0 it follows that we must have r ~ p (or smaller); that is, the only way of increasing the conditional probability of life is to first select a highly improbable physical regime. As Dembski points out, the improbability has now been shifted onto the probability of the physical regime. Dembski's point becomes clearer if we convert our probabilities to formal "information" values as follows.
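To make this trade-off concrete, here's a minimal numerical sketch in Python; the figures for p and q are purely illustrative assumptions on my part, not estimates of anything:

```python
import math

p = 1e-100   # assumed unconditional probability of life (purely illustrative figure)
q = 0.5      # a "realistic" conditional probability we would like some physical regime to confer

# Relationship 2.0, q < p/r, rearranged: the regime's own probability must satisfy r < p/q.
r_bound = p / q
print(f"r must be below {r_bound:.1e}")   # ~2e-100: the regime is roughly as improbable as life itself

# The improbability has not been removed, only relocated: r_bound and p are of the same tiny order.
print(math.isclose(math.log10(r_bound), math.log10(p), abs_tol=1.0))   # True
```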
Now, the so-called information function I(p), as used by Dembski, is defined for a probability of p using:
I(p) = -ln(p)          ...3.0
...where ln is the natural logarithm function. From this definition it is clear that for very small values of p, function 3.0 is going to return a large value of I, that is, a large "information" value.

The so-called "conservation of information" becomes clearer if we take the natural log of expression 2.0, apply definition 3.0 and then, with a bit of rearranging, arrive at:
I(q) + I(r) > I(p)          ...4.0
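For completeness, here is the small piece of algebra behind 4.0, using nothing beyond expression 2.0 and definition 3.0 (the strict inequality simply carries through from 1.0):

```latex
\begin{aligned}
q &< \frac{p}{r}                 &&\text{(expression 2.0)}\\
\ln q &< \ln p - \ln r           &&\text{(taking natural logs)}\\
-\ln q &> -\ln p + \ln r         &&\text{(multiplying through by } -1\text{)}\\
I(q) &> I(p) - I(r)              &&\text{(definition 3.0: } I(x) = -\ln x\text{)}\\
I(q) + I(r) &> I(p)              &&\text{(relationship 4.0)}
\end{aligned}
```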
Those looking for "natural explanations" don't expect the emergence of life from a "natural" physical regime to be a surprise but rather a very "natural" outcome given that regime. This is tantamount to requiring that I(q), the "surprisal" value of the conditional emergence of life, be relatively low. The trouble is that because I(p) is so high, it follows from 4.0 that if I(q) is to be low then I(r), the information value of the physical regime, is necessarily very high. Relationship 4.0 is effectively a "zero sum game" expression (the sum on the left can never fall below I(p)), which means that something has to soak up the information: either I(q) or I(r) or both. We are therefore always destined to be surprised by the extreme contingency nature flings at us from somewhere within relationship 4.0. So, at first sight we seem to have an information conservation law expressed by 4.0.
Relationship 4.0 is in fact borne out by a closer look at conventional evolution, a process which somehow generates structures that in an absolute sense are highly improbable. Joe Felsenstein himself implicitly acknowledges relationship 4.0 in his suggestion that the information for life is embodied in the physical regime we call the "laws of physics" (see here and here). If so, then from 4.0 we infer that these laws must be a very improbable state of affairs and therefore of very high information. Evolution as it is currently conceived requires that this information expresses itself in what I call the "spongeam", about which I say more in this blog post. (Actually, my opinion is that the spongeam doesn't exist and that some other provision applies; more about that another time.)
Relationship 4.0 is beguiling: it seems to come out of some simple and rigorous mathematics. But it embeds an assumption, namely that I(p) has a very high value because we assume from the outset a computational method which involves a serial "throwing of the die" as it were, a method which is going to require many, many conventional serial computational steps and therefore has a prohibitively high time-complexity as far as practice is concerned*2. But if we have dice rather than just a die we can have more than one trial at a time, and the chances of creating life by chance alone increase, although it is clear that there would have to be an enormous number of parallel trials to return a significant probability of generating living configurations in this way. This multi-trials technique is effectively the brute-force resort of the multiverse extremists. It is a fairly trivial conclusion that increasing the number of parallel trials has the effect of "destroying" information, in that increasing trial numbers increases the probability of an outcome and so its information value goes down. Clearly, in the face of huge numbers of parallel trials an outcome, no matter how oddly contingent it might be, is no longer a "surprise" in such a "multiverse". Not surprisingly this concept appeals to anti-theists who feel more at home in a Godless multiverse.
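As a sketch of that "die versus dice" point, the Python fragment below computes the surprisal of getting at least one success from N independent parallel trials; the single-trial probability p is again a purely illustrative assumption:

```python
import math

p = 1e-100          # assumed single-trial probability of hitting a living configuration (illustrative)

def surprisal_after_trials(p, n_trials):
    """Return -ln of the probability that at least one of n independent trials succeeds."""
    log_no_success = n_trials * math.log1p(-p)       # ln((1 - p)**n), computed safely for tiny p
    p_at_least_one = -math.expm1(log_no_success)     # 1 - (1 - p)**n
    return -math.log(p_at_least_one)

for n in (1, 1e10, 1e50, 1e90):
    print(f"trials = {n:.0e}:  surprisal = {surprisal_after_trials(p, n):6.1f} nats")

# Every extra factor of e in the trial count strips roughly one nat from the surprisal:
# pile on enough parallel trials and the outcome's "information" is steadily destroyed.
```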
In the paper linked to in this post of the Melancholia I project I looked into the effect of increasing parallel trial numbers and in particular I considered the subject of expanding parallelism in the generation of outcomes. It's a fairly obvious conclusion that increasing parallel trials increases the probability of a result! But it also goes to show, perhaps a little less obviously, that information isn't conserved in such a context; in fact in this context information is effectively destroyed by the increasing trial numbers and in particular by expanding parallelism. This sort of thing is likely to go down well with anti-theists because the "surprisal" value (i.e. -ln(p)) associated with outcomes is eroded, although of course anti-theists may still be surprised that such a multiplying system exists in the first place!
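And here is a toy Python sketch of expanding parallelism itself: if the number of parallel trials doubles at every step (a crude stand-in for exponentially multiplying resources), even an absurdly small single-trial probability is overcome after a modest number of doubling generations. The figures are, as before, illustrative assumptions only:

```python
import math

p = 1e-100              # assumed single-trial probability (illustrative only)
target_surprisal = 1.0  # stop once the outcome is no longer a "surprise" (about 1 nat)

generations, trials = 0, 1.0
# Small-p approximation: P(at least one success) ~ trials * p, capped at 1.
while -math.log(min(trials * p, 1.0)) > target_surprisal:
    trials *= 2          # expanding parallelism: the trial count doubles every generation
    generations += 1

print(f"After {generations} doubling generations (~{trials:.1e} trials) "
      f"the outcome ceases to be surprising.")   # around 331 generations for p = 1e-100
```

Of course, nothing in this toy model tells us that such multiplying resources actually exist; that is precisely the point at issue.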
As I contend in this post, multiverse ideas which posit enough trials to destroy our surprise at the universe's amazing organised contingencies leave us looking out on a cosmos whose aspect is empty, purposeless and anthropologically meaningless. And yes, I say it again: this kind of universe suits the anti-theists down to the ground; it seems to be the sort of universe they eminently prefer. But in spite of that there is something to take away from these multiverse ideas, in particular the idea of expanding parallelism, hinted at by quantum mechanics, which is evidence of the potential availability of huge computational resources. Given the concept of omniscience & omnipotence implicit in the Christian notion of God, positing the existence of these huge computational resources doesn't seem so outrageous. But in a Christian context the computational potential of expanding parallelism has, I suggest, purpose and teleology and is in fact evidence of a declarative seek, reject and select computational paradigm.
***
Although the -ln(p) concept of information used by Dembski succeeds in quantifying some of our intuitions about information, it does have some notable inadequacies, and it is these inadequacies which take us on to the subject of Felsenstein's Panda's Thumb post, namely algorithmic information theory. Let me explain...
Ironically, I called the paper that explores the subject of expanding parallelism "Creating Information" rather than "Destroying Information". This is because my Melancholia I project is really about a concept of information very different to -ln(p). The need for this different concept of information becomes apparent from the following considerations. Although the function -ln(p) adequately quantifies our level of surprisal at outcomes, this definition of information is not good at conveying the idea of configurational information. Take this example: the chance of finding a hydrogen atom at a designated point in the high vacuum of space is very small, and therefore we have a high-information event here should it happen. But it is a very elementary event, an event which only conveys one bit of information: 'yes' or 'no' depending on whether a hydrogen atom appears or not. The trouble is that a one-bit configuration is hardly what one would like to call a lot of information! Therefore we need something that is better at conveying quantity of information. From the function -ln(p) it follows that a one-bit configuration can "contain" the same amount of information as a large n-bit configuration. This doesn't feel very intuitive, particularly if we are dealing with potentially large and complex configurations; it seems intuitively disagreeable to classify a complex configuration as possibly having the same level of information as a one-bit configuration.*1
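A minimal Python illustration of the point (the 2^-100 figure is an arbitrary stand-in for "very improbable"):

```python
import math

bits = lambda prob: -math.log(prob) / math.log(2)   # surprisal measured in bits

# A single yes/no event with probability 2**-100, e.g. the hydrogen atom turning up
# at the designated point (the figure is purely illustrative):
print(bits(2.0 ** -100))    # ~100 bits, yet the configuration itself is just one bit: yes or no

# A specific string of 100 fair coin flips, also probability 2**-100:
print(bits(0.5 ** 100))     # ~100 bits for a genuinely 100-bit configuration

# -ln(p) hands both the same "information" value; it tracks surprisal, not configurational size.
```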
Algorithmic information theory attempts to measure the information content of a configuration via its algorithmic complexity (roughly, the length of the shortest program needed to generate it), and this returns a measure of information which agrees with our intuitive ideas about the quantity of information found in a configuration, something that -ln(p) doesn't necessarily convey. However, using this concept of information we find that once again the IDists think they have stumbled on another information barrier that scuppers any creation of life by those inferior but dreaded "natural forces"! In the next post this contention will take me into the subject of my book on Disorder and Randomness, which also deals with algorithmic information theory. Once again we will find that expanding parallelism bursts through the information barrier. Although Joe Felsenstein and his buddies certainly won't need any help from me to engage the IDists, I will in fact be using my own concept of Algorithmic Information to look into the IDist claims because it provides me with something immediately to hand.
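Since algorithmic information is not itself computable, a common (and admittedly crude) proxy is compressed length; the little Python sketch below uses zlib in that spirit, just to show how this kind of measure tracks configurational structure in a way that -ln(p) does not. It is an illustrative stand-in, not the measure used in my book:

```python
import random
import zlib

def approx_algorithmic_info(data: bytes) -> int:
    """Crude upper-bound proxy for algorithmic information: length of a compressed description."""
    return len(zlib.compress(data, 9))

ordered = b"ab" * 5000                                            # highly ordered 10,000-byte configuration
random.seed(0)
disordered = bytes(random.getrandbits(8) for _ in range(10000))   # disordered 10,000-byte configuration

print("ordered:   ", approx_algorithmic_info(ordered), "bytes")     # small: a short "program" regenerates it
print("disordered:", approx_algorithmic_info(disordered), "bytes")  # close to 10,000: little exploitable structure
```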
There is one more ingredient that needs to be added to the mix to complete the picture of information creation and this is an ingredient which will certainly not be to the taste of anti-theists whose world view is one of a purposeless cosmos without teleology. I'm talking of my speculative proposal that the cosmos isn't working to some meaningless and mindless procedural process that just goes on and on and leads to nowhere but rather is operating some kind of purposeful declarative computation that uses expanding parallelism to seek, reject and select outcomes. It is in this context that the notion of specified information suddenly jumps into sharp focus; in fact I touch on this subject toward the end of the paper I've linked to in part 4 of my Thinknet project. (See section 11).
I have to confess that if the cosmos is using a purposeful declarative computational paradigm that makes use of expanding parallelism, I'm far from having all the details: all I have currently is an understanding of the effect that expanding parallelism has on computational complexity, and the metaphor of my Thinknet project, which seems to have parallels with quantum mechanics; quantum mechanics looks suspiciously like a seek, reject and select declarative computation which taps into the resources of expanding parallelism. The contrasting alternatives to my conjectures are that we either have the anti-theist's meaningless procedural multiverse or the primitive notion that God did indeed simply utter authoritarian magic words and via brute omnipotence speak stuff into existence! The latter seems very unlikely theologically speaking: David Bump, a (nice) young-earthist Christian I am currently corresponding with, has kindly and respectfully supplied me with a long document of his thoughts on what it means to be a Christian who sees God as creating things via spoken words "as is" about 6000 years ago. Frankly I doubt it! As I have analysed David's arguments I have found that, for me, all this leads to huge theological problems, and unless I turn to fideism these problems don't look as though they are going to go away! I will in due course be publishing my response to David.
Footnotes:
*1 A bit stream can carry a lot of information in the sense of definition 3.0 because its probability is the product of many probabilities, and this product may be very small, giving a correspondingly high information content. But the trouble with -ln(p) is that a one-bit configuration could be equally as information laden. Another problem with -ln(p) is that once a configuration becomes a "happened event" and is recorded, all its information is lost. This is because probability is a measure of subjective knowledge, and therefore once a large configuration becomes known ground, no matter how complex, it loses all its information... a sure sign that "information" in this subjective sense is easily destroyed and therefore not conserved.
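In symbols (and assuming the bits of the stream are statistically independent), the accumulation referred to here is just the additivity of the -ln measure:

```latex
I\!\left(\prod_{i=1}^{n} p_i\right) \;=\; -\ln\prod_{i=1}^{n} p_i \;=\; \sum_{i=1}^{n} -\ln p_i \;=\; \sum_{i=1}^{n} I(p_i)
```

So a long stream of individually unremarkable bits can clock up the same -ln(p) value as a single wildly improbable one-bit event, which is exactly the awkwardness noted above.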
*2 There is also another assumption here (or perhaps it's a confusion rather than an assumption): that probability and randomness are identical concepts. They are not: see my paper on Disorder and Randomness. Dembski uses the principle of indifference in assigning equal probabilities to outcomes for which no prior knowledge exists as to why one outcome should be preferred over another; hence the outcomes, from this subjective point of view, have equal probability. This procedure is correct in my opinion, but two outcomes which subjectively speaking have an equal probability are not necessarily equally random; randomness is an objective quality deriving from configurational disorder.