(This post is still undergoing correction and enhancement)
[Image: NAID's Stephen Meyer interviewed by Unwoke Right Wing Republican Dan Crenshaw. Evidence that NAID has become part of the politicized culture war.]
I see that William "Bill" Dembski has done a post on Evolution News on the subject of the "Conservation of Information". The article is for the most part an interesting history of that phrase and goes to show that "information" has a number of meanings dependent on the discipline where it is being used, with Bill Dembski having his own proprietary concerns tied up with his support of the North American Intelligent Design (NAID) community. See here for the article:
Conservation of Information: History of an Idea | Evolution News
Bill's particular information interest seems to lie with the so-called "No Free Lunch Theorems". These theorems concern the mathematical limits on computer algorithms designed to search for configurations with properties of particular interest. Bill's focus on the "No Free Lunch Theorems" is bound up with the NAID community's challenge to standard evolution, a process which they see as a threat to their self-inflicted XOR creation dichotomy; viz: either "God Intelligence did it" XOR "Blind unguided natural forces did it".
But Bill gets full marks for spotting the relevance of these theorems to evolutionary theory: evolution does have at least some features isomorphic with computer searches; in particular these theorems do throw some light on evolution's "search", reject and select mechanism which locks in organic configurations. So, the least I can say is that Bill's interest in the "No Free Lunch Theorems" is based on what looks to be a potentially fruitful avenue of study. However, although it is true that these theorems reveal interesting mathematical limits on computer searches, as we will see Bill has gone too far in trying to co-opt them for his concept of information conservation; in fact, to the contrary, I would say that these theorems prove that Bill is wrong about the conservation of information.
***
We can get a gut feeling for the No free lunch theorems with the impressionistic & informal mathematical analysis in this post.
(Note: I arrived at similar conclusions in these two essays...
GeneratingComplexity2c.pdf - Google Drive
CreatingInformation.pdf - Google Drive
These essays are more formal and cover the subject in more detail)
***
We imagine that we have a set of computer programs executing in parallel with the intention of finding out if at some point in their computations they generate examples of a particular class of configuration. These configurations are to be found somewhere in an absolutely huge domain of possible configurations that I shall call D and which numbers D members, where D is extremely large. It is a well-known fact that most of the members of D will likely be highly disordered.
A computer "search" starts with its initial algorithmic information usually coded in the form of a character string or configuration S of length S. This configurational string contains the information informing the computing machinery how to generate a sequence of configurations C1, C2,.....,Cn,.... etc. The software creates this sequence by modifying the current configuration Cn in order to create the next configuration Cn+1. A crucial operational characteristic of algorithms is that they are capable of making if-then-else type decisions which means that the modifications leading to Cn+1 will be dependent on configurational features found in Cn. It is this decisional feature of executed algorithms which gives them their search, reject and select character, not unlike evolution. This means that their trajectory through configuration space is often very difficult to predict without actually executing the algorithm. This is because the conditional decision-making of algorithms means that we can't predict what direction an algorithm will take at any one point in the computation until the conditions it is responding to have actually been generated by the algorithm. The concept of computational irreducibility is relevant here.
In his article Bill is careful to describe the components of search algorithms, components which give them their search, reject & select character. But for my purposes we can simplify things by ignoring these components and simply giving cognizance to the fact that an algorithm takes its computer along what is possibly a non-linear trajectory in configuration space. We can also drop Bill's talk of the algorithm aiming for a specified target and then stopping, since in general an algorithm can go on indefinitely moving through configuration space, endlessly generating configurations as does conventional evolution. All we need to be concerned about here is the potentiality for algorithms to generate a class of configs of interest in a "time" T, where T is measured in algorithmic steps.
***
If we have an algorithm with a string length of S then the maximum number of possible algorithms that can be constructed given this string length is A^S, where A is the number of characters in the character set used to write the algorithm.
We now imagine that we have these possible A^S algorithms all executing in parallel for T steps. It then follows that the maximum number of configurations C which potentially can be generated by these possible algorithms of length S will be no greater than the limits set by the following relationship....
C <= A^S × T
Relation 1.0
...where C is the number of configurations that can be created in time T if the set of algorithms are run in parallel and assuming that a) T is measured in algorithmic steps and that b) the computing machinery is only capable of one step at a time and generates one configuration per step per algorithm.
If the class of configurations we are interested in exists somewhere in a huge domain D consisting of D configurations, and where for practically realistic execution times T:
D >>> C
Relation 2.0
...then the relatively meager number of configurations our algorithm set can generate in realistic times like T is a drop in the ocean when compared to the size of the set of configurational possibilities that comprises D. If relationship 2.0 holds then it is clear that given realistic times T, our "searches" will be unable to access the vast majority of configurations in D.
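To get a feel for just how lopsided relation 2.0 can be, here is a quick back-of-envelope calculation in Python; the particular numbers (alphabet size, string length, step count, domain size) are illustrative assumptions of mine and nothing more.

```python
import math

# Back-of-envelope check of relations 1.0 and 2.0, working in log2 (bits)
# to keep the astronomically large numbers manageable. The particular
# numbers (alphabet sizes, string lengths, step count) are illustrative
# assumptions, not figures taken from Dembski's article.

A = 100          # characters available for writing an algorithm
S = 1_000        # length of the algorithm string
T = 10**18       # execution time in algorithmic steps

# Relation 1.0: C <= A^S * T, so log2(C) <= S*log2(A) + log2(T)
log2_C_max = S * math.log2(A) + math.log2(T)

# A domain D of configurations: binary strings of length one million
L = 1_000_000
log2_D = L       # D = 2^L members

print(f"log2(C) upper bound : {log2_C_max:,.0f} bits")
print(f"log2(D)             : {log2_D:,} bits")
print(f"log2(D / C) at least: {log2_D - log2_C_max:,.0f} bits")
# The searchable fraction C/D is at most 2^-(log2(D) - log2(C)):
# a vanishingly small sliver of the domain, i.e. D >>> C (relation 2.0).
```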
With the above relationships in mind no free lunch starts to make some sense: If we are looking for algorithms which generate members of a particular class of configuration of interest (e.g. organic-like configurations) then for the algorithmic search to have a chance of succeeding in a reasonable time we require one of the following two conditions to be true...
1. Assuming that such exists, an algorithm of reasonable length S has to be found which is able to generate the targeted class of configurations within a reasonable time T. However, if relationship 2.0 holds then it is clear that this option will not work for the vast majority of configurations in D.
2. The alternative is that we invalidate relationship 2.0 by either a) allowing the length S of the algorithms to be large enough so that A^S ~ D, or b) allowing the execution time T of these algorithms to be sufficiently large so that T ~ D, or c) allowing that T and A^S, when combined, invalidate relationship 2.0.
***
So, with the foregoing in mind we can see that if an algorithm is to generate a stipulated class of solution in domain D in a reasonable time T then either a) it has to be logically possible to code the algorithmic solution in a starting string S of reasonable length S, or b) we have to code the required information into a long string S of length S such that A^S ~ D.
In case a) both S and T are of a practically reasonable magnitude, from which it follows, given relationship 1.0, that little of the domain D can be generated by such algorithms, and therefore the majority of configurations that could possibly be designated as of interest in D (especially if they are complex disordered configurations) cannot be found by these algorithms. In case b) the starting string S, in terms of the number of possible algorithms that can be constructed, is commensurate with the size of D.
Therefore it follows that if we are restricted to relatively short algorithm strings of length S then these algorithms will only have a chance of reaching the overwhelming majority of configurations in D after very long execution times. If our configurations of designated interest are in this long execution time region of D, these configurations will take a very long time to generate. Long execution time algorithms, absent any helpful starting strings which provide "short cut" information, are I think what Bill calls "blind search algorithms". That emphasis on the word "blind" is a loaded misnomer which appeals to the NAID community for reasons which I hope will become clear.
***
For Bill, this is what no free lunch means...
Because no-free-lunch theorems assert that average performance of certain classes of search algorithms remain constant at the level of blind search, these theorems have very much a conservation of information feel in the sense that conservation is strictly maintained and not merely that conservation is the best that can be achieved, with loss of conservation also a possibility
It's true that, unless primed with the right initial information, by far and away the majority of algorithms will reach most targets that can be designated as of interest only after very long execution times involving laborious searching..... ergo, upfront information that lengthens S is needed to shorten the search; in fact this is always true by definition if we want to generate random configurations.
So, the following is my interpretation of what Bill means by the conservation of information; namely, that to get the stipulated class of garbage out in reasonable time you have to put the right garbage in from the outset. The "garbage in" is a starting string S of sufficient length to tip the algorithm off as to where to look. The alternative is to go for searches with very long execution times T. So, paraphrasing Bill, we might say that his conservation of information can be expressed by this caricatured equation:
G_in = G_out
Relation 3.0
Where G_in represents some kind of informational measure of the "garbage" going in and G_out is the informational measure of the garbage coming out of the computation. But the following is the crucial point which, as we will see, invalidates Bill's conservation of information: relationship 3.0 gives Bill his conservation of information feel but it is an approximation which only applies to reasonable execution times.....it neglects the fact that the execution of an algorithm does create information, if only slowly. That Bill has overlooked the fact that what he calls "blind searches" nevertheless slowly generate information becomes apparent from the following analysis.
***
If we take the log of relation 1.0 we get:
Log (C) <= S Log (A) + Log(T)
relation 4.0
The value C is the number of configurations that A^S algorithms will generate in time T and this will be less than or equal to the righthand side of the above relation. The probability of one of these C configurations being chosen at random will be 1/C. Converting this probability to a Shannon information value, I, gives:
I = - Log (1/C) = Log (C)
relation 5.0
Therefore substituting I into 4.0 gives:
I <= S Log (A) + Log(T)
relation 6.0
Incorporating Log (A) into a generalized measure of string length S gives....
I <= S + Log(T)
relation 7.0
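To put relation 7.0 in numbers, here is a quick Python sketch; the figure of 1000 bits for the string contribution and the chosen step counts are illustrative assumptions of mine.

```python
import math

# Relation 7.0 in numbers: I <= S + Log(T). With the up-front string
# contribution held fixed, the information contributed by execution time
# grows only logarithmically. The figures below (S supplying 1000 bits,
# the chosen step counts) are illustrative assumptions.

S_bits = 1_000                      # generalized string length S, in bits
for T in (10**3, 10**6, 10**9, 10**12, 10**15, 10**18):
    time_bits = math.log2(T)        # the Log(T) term of relation 7.0
    print(f"T = {T:7.0e} steps:  I <= {S_bits} + {time_bits:5.1f} bits")

# Even after 10^18 steps the "blind search" term has contributed only
# about 60 bits: information is generated, but very slowly.
```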
From this relationship we can see that parallel algorithms do have the potential to generate Shannon information with time T; the information is not just incorporated from the outset in a string of length S. However, we do notice that because the information generated by execution time is a log function of T, that information is generated very slowly. This is what Bill has overlooked: what he derisively refers to as a "blind search" (sic) actually has the potential to generate information, if very slowly. Bill's view is expressed further in the following quote from his article (with my emphases and insertions in red).....
With the no-free-lunch theorems, something is clearly being conserved in that performance of different search algorithms, when averaged across the range of feedback information, is constant and equivalent to performance of blind search.[Log(T) is the "blind search" component] The question then arises how no free lunch relates to the consistent claim in the earlier conservation-of-information literature about output information not exceeding input information. In fact, the connection is straightforward. The only reason to average performance of algorithms across feedback information is if we don’t have any domain-specific information to help us find the target in question.[The "domain-specific" information is implicit in the string S of length S in relation 7.0]
Consequently, no free lunch tells us that without such domain-specific information, we have no special input information to improve the search, and thus no way to achieve output information that exceeds the capacities of blind search. When it comes to search, blind search is always the lowest common denominator — any search under consideration must always be able to do at least as good as blind search because we can always execute a blind search.[Oh no we can't Bill, at least not practically quickly enough under the current technology; we still await the technological sophistication to implement the expanding parallelism needed for "blind search" to be effective, the holy grail of computing. "Blind search" is a much more sophisticated idea than our Bill and his NAID mates are making out!] With no free lunch, it is blind search as input and blind search as output. The averaging of feedback information treated as input acts as a leveler, ensuring parity between information input and output. No free lunch preserves strict conservation [Tough, not true!] precisely because it sets the bar so low at blind search.
By distilling its findings into a single fundamental relation of probability theory, this work provides a definitive, fully developed, general formulation of the Law of Conservation of Information, showing that information that facilitates search cannot magically materialize out of nothing but instead must be derived from pre-existing sources.[False; information derives not just from S, but can also creep in from an algorithm's execution time T ]
Blind search, blind search, blind search, blind, blind, blind,...... the repeated mantra of NAID culture which with its subliminal gnosto-dualism repeatedly refers to the resources of God's creation as a place of "blind natural forces". Sometimes you will also hear them talk about "unguided natural forces".
Bill's last sentence above is clearly false, as false can be; he's overlooked the slowly growing information term in relation 7.0. Information is not conserved during a search because the so-called "blind search" (sic) term is slowly, almost undetectably, creating information. There is therefore no "strict conservation of information" (sic). That the so-called "blind search" (sic) is being understated by Bill and the NAID culture he represents becomes very apparent as soon as we realize that relation 7.0 has been derived on the assumption that we are using parallel processing; that is, a processing paradigm where the number of processors doing the computation is constant. But if we start thinking about the exponentials of a process which utilizes expanding parallelism, the second term on the righthand side of 7.0 has the potential to become linear in T and therefore highly significant. This is why so much effort and cash is being put into quantum computing: quantum computers clearly create information at a much more rapid rate, and it is the monumental resources being invested in this line of cutting-edge research which gives the lie to Bill's contention that information is conserved during computation and that somehow "blind search" rates as a primitive last resort.
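The following crude sketch contrasts fixed parallelism with an idealized expanding parallelism in which the processor count doubles every step; the doubling rule is my own illustrative assumption, used only to show how the Log(T) term can become effectively linear in T.

```python
import math

# A crude contrast between fixed and expanding parallelism (the doubling
# rule is an illustrative assumption, not anything from Bill's article).
# With a fixed processor count the configurations generated grow roughly
# like T, so the time term of relation 7.0 is ~Log2(T) bits; if the number
# of processors doubles every step the count grows like 2^T, and the same
# log turns that into roughly T bits, i.e. linear rather than logarithmic.

for T in (10, 20, 40, 80):
    fixed_bits = math.log2(T)       # fixed parallelism: ~Log2(T) bits
    expanding_bits = T              # doubling parallelism: ~T bits
    print(f"T = {T:>3} steps:  fixed ~ {fixed_bits:4.1f} bits,  "
          f"expanding ~ {expanding_bits} bits")
```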
***
As far as the big evolution question is concerned I regard this matter with studied detachment. God, as the sovereign author of the cosmic story, could introduce information into the cosmic configuration generator using either or both terms in relation 7.0; in particular, if unlike primitive humanity at our current technological juncture God has at his fingertips the power of expanding parallelism to crack the so-called blind search problem, the second term on the righthand side of 7.0 has the potential to become significant. Accordingly, I reject NAID's wrongly identified "blind natural forces" category when those forces are in fact highly sophisticated because they are in the hands of Omniscient Omnipotence. The trouble is that the NAID community have heavily invested in an anti-evolution culture and it looks like they've passed the point of no return, such is their huge social and tribal identification with anti-evolutionism. Ironically, even if bog-standard evolution is true (along with features like junk DNA) we are still faced with the Intelligent Design question. As for myself, I have no indispensable intellectual investment in either the evolutionist or anti-evolutionist positions.
***
As I have remarked so many times before, what motivates NAID (& YEC) culture's aversion to the idea that information can be created by so-called "blind natural forces" is this culture's a priori anti-evolution stance. Underlying this stance, I propose, is a subliminal gnosto-dualist mindset, and this mindset in its subliminal form afflicts Western societies across the board, from atheism to authoritarian & touchy-feely expressions of Christianity; in fact Western religion in general. But that's another story. (See for example my series on atheist Don Cupitt - a series yet to be completed)
What's compounded my problem with NAID & YEC nowadays is their embrace of unwoke political culture, a culture which automatically puts them at odds with the academic establishment. I'll grant that that establishment and its supporters have often (or at least sometimes?) subjected outsiders (like Bill for example) to verbal abuse (e.g. consider Richard Dawkins & the Four Horsemen, RationalWiki etc.) and this has helped urge them to find friends among the North American far-right academia-hating tribes and embrace some of their political attitudes (See here). As I personally by and large support academia (but see here) it is therefore likely that I too would be lumped together by the NAID & YEC communities as a "woke" sympathizer, even though I reject any idea that the problems of society can be finally fixed by centralized social engineering, least of all by Marxist social engineering. But then I'm also a strong objector to far-right libertarian social engineering which seeks a society regulated purely by a community's use of their purses (and is then prey to the chaotic non-linearities of market economics and power grabbing by plutocratic crony capitalists). In today's panicked and polarized milieu the far-right would see even a constitutional Royalist like myself, who is also in favour of a regulated market economy, as at best a diluted "socialist" and at worst a far-left extremist, ripe for the woke-sin-bin!