Showing posts with label Bad Theology. Show all posts

Tuesday, August 05, 2025

Classic Dualism


Courtesy of the Faraday Institute

I'm part of a Facebook group called Evangelicals for Evolutionary Creation. This is not to say that I've committed myself to standard evolutionary thinking, but I feel that the group's members are thinkers worth keeping an eye on. However, somebody put the following comment on their FB feed....

So I’m getting toward the end of Origins by the Haarsmas. A question arises, if abiogenesis is true, how does this not prove that life can happen without God? This kind of concerns me and it seems to be an open question in evolutionary creationism.

I believe that "Haarsmas" is a reference to Deborah Haarsma, the current president of BioLogos, the Christian evolutionary creation organisation. I didn't comment on this statement as the Evolutionary Creation people are more than capable of critiquing such a breathtakingly naive perspective, a perspective with widespread appeal among both Christians and atheists. On this view it's a binary choice: "Either God did it or evolution did it".

I've no doubt said something like the following many times before: Since the enlightenment Western science has merely shown us that the cosmos is sufficiently organized for us to form succinct mathematical statements describing its dynamics. As many Christians fully understand, those descriptions in and of themselves only tell us about the "how?" of the cosmos and not the "why?" - but the "why?" is only a meaningful question if one first accepts that sentience, intelligence and purpose are a priori features of existence.

If anything this strange mathematical descriptive elegance only compounds the enigma of the cosmos and tells us little about absolute origins; that is the ultimate gap, a gap that descriptive science is logically incapable of filling and which, if pressed, simply leaves us with an elegant-descriptions-all-the-way-down regress. In fact, since we have no logically obliging reason for the continued existence of the contingencies of our cosmic reality, that ultimate gap is everywhere and everywhen.

And yet the dualistic view expressed by the above quote is the common default: that is, "either God did it or cosmic processes did it"; the underlying assumption of this perspective is that somehow the enigma of cosmic organization has a logical self-sufficiency which at best only leaves room for the God of deism or at worst no God at all. Such a perspective might have its origins in the early enlightenment/industrial era when it started to become much clearer that mechanisms (such as the steam regulator & automata) could be developed which meant that machines looked after their own running. The popularist conclusion was that the cosmos must be that kind of mechanism. Such mechanisms appeared not to need any prayerful ritualistic support or mystical input of any kind to continue. On this perspective sacredness seems to have been purged from what was now thought of as a self-sustaining profane cosmos.

But the realization that such mechanisms were so startlingly sophisticated as to beg the question of their design seems to have been lost on many people: One such person in our modern era is (atheist?) theologian Don Cupitt of the Sea of Faith movement. Also, blowhard atheist Richard Carrier is of this ilk. Carrier is so convinced by the sophistry of his flawed view of probability and randomness that he believes probability to be logically sufficient to fill in the God-gap. And yet Carrier succeeded in identifying that our cosmic context lacks some logically self-sufficient kernel, although Carrier's erroneous concept of probability doesn't provide that kernel.


***

It is surely ironic that the selfsame virtuoso cosmic organization which for some fills in the God-gap actually intensifies the nagging enigma of the absolute origins question; the contingent particularity of that organization is amazing. In fact, as I have shown, evolution itself (if it has occurred) is effectively creationism on steroids. And yet it is the underlying dualism of God vs evolution that much of the North American Intelligent Design movement (NAID) trades on. They will deny it of course, but whenever they open their mouths it is easy to see that they are exploiting the popularist God-of-the-gaps "Intelligence vs blind natural forces" dichotomy. To attack standard evolution on the scientific basis that the evidence is insufficient is one thing, but to attack it on the basis of a half-cocked dualist philosophy is quite another - and I put it to the NAID community that although they affect to claim theirs is a scientific dispute, their ulterior reasoning is in fact based on the popular appeal of their philosophical dualism, whatever they might claim. That appeal, however, is understandable I suppose, because the above quote from a Facebook page is in fact the tip of a huge market iceberg of popularist thinking which the NAID community's dichotomized explanations address and by which they make their money, trade and continue in mutual backslapping. For more on NAID see here, here and here.



NOTE: Luskin's God-of-the-Gaps paradigm

As I've made clear before, I don't think much of NAID theorist Casey Luskin's competence as an apologist for Intelligent Design. This post on Evolution News, which describes Luskin's views, cements his reputation as a God-of-the-Gaps apologist. As I've said above I have no intellectual commitment to standard evolutionary theory, but what is clear is that, evolution or no evolution, one cannot get away from the question of intelligent design. That Luskin is so anti-evolution, a priori, is evidence that he still thinks subliminally in dualist and atheist categories in so far as he believes it to be a choice between "blind natural forces vs intelligent design"..... where he interprets evolution atheistically in terms of "blind natural forces". Ergo, Luskin is a God-of-the-Gaps apologist whatever he claims.

Sunday, May 04, 2025

Creation, Probability and Something for Nothing? Part V

 Let's Carry on Carriering Part V


This is my continuing critique of an article by commercial historian and unquenchable blowhard Richard Carrier. In his article Richard believes he has used probability calculus to show that "No god [is] needed" to create a universe. Well, in this instance there is no need for me to argue either for or against atheism; for the purposes of this post it is sufficient for me to show that Richard's misunderstanding and mishandling of probability and randomness hamstrings his polemic completely.  In Part II I pointed out where his argument comes off the rails and from that point on he constructs a teetering house of cards. 

The other parts of this series can be found here....

Quantum Non-Linearity: Let's Carry on Carriering Part I

Quantum Non-Linearity: Let's Carry on Carriering Part II

Quantum Non-Linearity: Let's Carry on Carriering Part III

Quantum Non-Linearity: Let's Carry on Carriering Part IV

On the whole Richard started his article well. In the first part of this series we saw Richard defining what he referred to as Nothing; note the capitalized N. Richard tells us that this kind of Nothing is what you are left with when all mere logical contingencies have been removed and one is left with a bare minimum of logical truisms, truisms which can't be removed without logical contradiction. I had no problems with this proposal. I also agreed that many of the classical "proofs" for God's existence are very dubious to say the least.  But I noted that Richard said nothing about the actual content of this exotic and mysterious placeholder he calls "Nothing" and I went on to say that this omission allows theism to slip in by the back door. Richard might have attempted to lock and bolt the front door but he's left the back door wide open. However, for my current purposes there is no need for me here to smuggle in God using "back door theism" because my focus is on his foundational logical errors, errors which bring his house of cards crashing down, never mind that he's actually failed to even lock the front door.

Let me finish this opening section with this: As I might have said before, theism, particularly Christian theism, is at the very least a mythological world view which for me is the abductive narrative making a whole lot of retrospective sense of an otherwise very perplexing and meaningless world. Moreover, it provides compelling insights into the human predicament; for me personally it is a successful "Weltanschauung" (world-view) which is actually more than mythology; it is mythology++. However, we must concede that world-views attempt to encompass and synthesize a very wide field of proprietary experience and unique personal histories, and therefore world-view analysis is a rather subjective and contentious business on which the agreement theorem hits the rocks.

Although I would recommend Christianity to atheists even if they are to regard it as only a compelling mythological world-view, I nevertheless respect and understand their perspective given the cosmic context which has developed in our consciousness since the enlightenment  ...although I have little sympathy with the kind of flawed and triumphalist polemic we get from Richard Carrier. 

***


RICHARD: Probability of Something from Nothing. Proposition 8 holds that “when there is Nothing,” then “every possible number of universes that can appear has an equal probability of occurring,” and Proposition 9 holds that therefore “the probability of Nothing remaining nothing equals the ratio of one to n, where n is the largest logically possible number of universes that can appear.” We can therefore calculate limits on how likely it is that something would exist now, given the assumption that once upon a time there was Nothing—not a god or quantum fluctuation or anything else, but literally in fact Nothing.


MY COMMENT: I've already covered propositions 8 and 9 in part IV but I'll outline again Richard's two main embarrassments here. 

In the above Richard has assumed that if he is given a probability this implies he has in his hands an objective source capable of randomly creating outcomes. This is an error on at least two counts as we will see. I can, however, accept  this:

Proposition 8 holds that “when there is Nothing,” then “every possible number of universes that can appear has an equal probability of occurring,”

But then this doesn't follow:

Proposition 9 holds that therefore “the probability of Nothing remaining nothing equals the ratio of one to n, where n is the largest logically possible number of universes that can appear.”

As I remarked in the previous parts, probability is an intelligible concept only if one first assumes the existence of an observer who is able to form an enumerated (or denumerated) ratio of what are believed to be logical contingencies. That is, probability presupposes the existence of a self-aware observer cognitively sophisticated enough to express information in terms of Laplace's classical probability quotient. For example, in proposition 8 we really haven't got a clue as to what this mysterious object or entity called Nothing is likely to create, if anything at all. Therefore Richard is right in suggesting that in the absence of any further information “every possible number of universes that can appear has an equal probability of occurring”. Well, as I know Richard himself realizes, it's going to be quite an intellectual challenge denumerating all the possible universes in order to return a Laplacian probability ratio here, but the principle entailed is apparently coherent and comprehensible; for as far as our quantified ignorance is concerned we are left with a ratio of 1 to n where n is clearly some huge number.

But between the two propositions 8 & 9 there is a serious logical fallacy. The probability ratio of 1 to n pertains to an observer's subjective information level and not to some potential creation dynamic which pertains to Nothing. Moreover, this probability is conditioned on our complete lack of knowledge as to which of the n logical contingencies Nothing, so called, will "choose" to create. Those apparent possibilities include any number of universes up to n, where n actually includes the "null" universe; that is, the universe with nothing in it. On this basis Nothing, so called, sounds like a pretty sophisticated object; don't you think Richard? (Arguing that with Nothing there is nothing to stop it creating something can be turned on its head: viz, there is nothing to stop Nothing remaining as Nothing; this kind of polemic is just informal verbal sophistry!)

Well, we know that Nothing didn't create the null universe so on the basis of these informational conditions the probability of the creation of a particular universe,  which I shall call Up, can be symbolized by:

Prob(Up/E) = 1/(n-1)

....where E is the information condition that a universe is known to exist, although at this stage we don't know which particular universe exists. Now, assuming we know which universe of the n-1 possible universes has been created (because we can look out and observe it), the number of outstanding possibilities reduces to 1. Therefore on these updated informational conditions...

Prob(Up/E) = 1/1 = 1 !!!

...which only goes to illustrate just how conditional probabilities are upon observer information. For the very reason that probability is a measure of observer ignorance it is an entirely incoherent move to then try to use it to impute a creative dynamic to an object such as Nothing of which we know very little. Probability in and of itself is not a creative dynamic; rather it concerns our knowledge or lack of knowledge about the object in question.
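To make that point concrete, here's a minimal illustrative sketch (my own, with arbitrary stand-in numbers, and certainly not anything from Richard's article): the Laplacian ratio changes each time the observer's information changes, while nothing whatsoever changes "out there".

```python
# A minimal sketch, assuming nothing beyond Laplace's classical quotient.
# The value of n is an arbitrary stand-in for the (unknown, huge) number of
# conceivable universes, including the "null" universe.
from fractions import Fraction

def laplace_probability(favourable: int, possible: int) -> Fraction:
    """Classical probability: favourable cases over equally weighted possible cases."""
    return Fraction(favourable, possible)

n = 1_000_000

# No information at all: every one of the n possibilities is on the table.
p_prior = laplace_probability(1, n)          # 1/n

# Condition E: some universe is known to exist, so the null universe drops out.
p_given_E = laplace_probability(1, n - 1)    # 1/(n-1)

# We look out and observe which universe exists: one possibility remains.
p_observed = laplace_probability(1, 1)       # 1

print(p_prior, p_given_E, p_observed)
# Only the observer's information has changed between these three lines;
# no creative dynamic has been imputed to anything.
```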

What is very clear is that whatever Prob(Up/E) works out at, we have no logical right to infer that Nothing will consequently generate universes at random....along such lines, I suspect, Richard is thinking. A quantified probability does not imply randomness, although the converse does hold: the patterns of randomness entail probability because these patterns are so algorithmically complex that, from a human angle, they are practically unknowable in succinct algorithmic terms. Therefore random outcomes can usually only be expressed in terms of probabilities (unless we've got a book of randomly generated numbers which we've memorised!).

***


RICHARD: Assume that only the numbers 0 to 100 exist, and therefore 100 is the largest logically possible number of universes that can appear. In that event, the probability that Nothing would remain Nothing (the probability of ex nihilo nihil) is 100 to 1 against. There being 101 numbers, including the zero, i.e. the continuation of nothing being the condition of there arising zero universes, and only one of those numbers constitutes remaining nothing, then there are 100 times more ways for Nothing to become something, than to remain nothing. And when there is Nothing, there is nothing to stop any of those other ways from materializing, nor does anything exist to cause any one of those ways to be more likely than any of the others.

It is therefore logically necessarily the case that, if we assume there was ever Nothing, the probability of ex nihilo nihil is less than 1%.

Of course, 100 is not the highest number. Go looking, you won’t find a highest number. It is in fact logically necessarily the case that no highest number exists. So really, the probability of ex nihilo nihil is literally infinitesimal—infinity to one against. One might complain that we don’t really know what that means. But it doesn’t matter, because we can graph the probability of ex nihilo nihil by method of exhaustion, and thus see that the probability vanishes to some value unimaginably close to zero.

MY COMMENT: Here we go again. Richard has projected his otherwise coherent probability examples onto the cosmos as if they entail a creation dynamic. This is very apparent in these sentences.....

In that event, the probability that Nothing would remain Nothing (the probability of ex nihilo nihil) is 100 to 1 against.

It is therefore logically necessarily the case that, if we assume there was ever Nothing, the probability of ex nihilo nihil is less than 1%.

So, according to Richard he can project what is in fact a purely subjective measure of information (i.e. probability) onto this mysterious big deal he calls Nothing and then come up with the conclusion that Nothing will very likely create a universe! This does not follow because those probabilities reside in the observer's head; those Laplacian ratios don't reside "out there".
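For what it's worth, the arithmetic Richard leans on is trivial and can be sketched in a few lines (illustrative only); the point to notice is that the ratio is pure book-keeping of an observer's enumerated possibilities, not a dynamic residing in "Nothing":

```python
# A minimal sketch of Richard's ratio: with n possible non-empty outcomes plus
# the single "stays nothing" outcome, the Laplacian ratio for "nothing remains
# nothing" is 1/(n+1), which shrinks as n grows (his "method of exhaustion").
def prob_nothing_remains_nothing(n_universes: int) -> float:
    return 1.0 / (n_universes + 1)

for n in (100, 10_000, 10**9):
    print(n, prob_nothing_remains_nothing(n))   # ~0.0099, ~0.0001, ~1e-9
# The shrinking number measures our ignorance over an ever larger enumeration;
# it does not confer any generative power on the thing being enumerated.
```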

***


RICHARD: We therefore do not need God to explain why there is something rather than nothing. There may also be something rather than nothing simply “because there just is.” There isn’t any actual basis for assuming “nothing” is the natural state of anything, or that there has ever really been nothing. We could honestly just as fairly ask why should there be nothing rather than something. No God is needed here. But even if we are to presume that there ever once was Nothing, we still need no further explanation of why then there is something. Because that there would be something is then as certain an outcome as makes all odds.

Formally:

· If Proposition 1, then Proposition 2

· If Proposition 2, then Proposition 3

· If Proposition 3, then Proposition 4

· If Proposition 4 and Proposition 1, then Propositions 5 and 7

· If Proposition 5 and Proposition 1, then Proposition 6

· If Propositions 5, 6, and 7, then Proposition 8

· If Proposition 8, then Proposition 9

· If Proposition 9 and Proposition 1, then the probability that Nothing would produce something is incalculably close to 100% and therefore effectively certain to occur.


   MY COMMENT:  Well OK let's run with the idea that "We do not need God to explain why there is something rather than nothing", whatever Richard means by "God" in this context. But according to Richard we do need two other things:


Firstly, of course, we need this enigmatic entity called "Nothing". But all we know about Nothing is that it is the irreducible logical truism left when all logical contingencies/possibilities have been eliminated; according to this account, trying to conceive of absolutely nothing is in fact a contradiction (I suspect that's true). That word "Nothing", however, is a placeholder for what may well be a very exotic truism capable of creating who knows what. Fair enough Richard, this point of yours has a good feel about it as far as I'm concerned.


    But secondly, Richard is asking us to accept his very logically dodgy maneuver involving the projection of subjective probabilities onto Nothing and then assuming that this is sufficient to give Nothing a dynamic with creative potential. Well yes, Nothing may well be sophisticated enough to be creative (in fact as a Christian I believe this entity is creative) but to suppose that human ignorance somehow projects that creative potential onto Nothing is not the way to argue the case! It's a bogus argument. And I say it yet again; probabilities pertain to a measure of observer ignorance and don't create anything.


But if I'm understanding him aright, Richard does have a fallback position which I can respect: he says above, “There may also be something rather than nothing simply ‘because there just is.’” That is very reminiscent of this post of mine on Galen Strawson where I quote Strawson suggesting that the universe "just is"; that is, it's just brute fact and to hell with abductive mythologies like Christianity which bring sense, purpose and meaning. If you simply find it impossible to believe that some kind of personal God has created our kind of universe with its all too off-putting human predicaments and suffering, then I have sympathy with that response. But I'm not sympathetic with Richard's cack-handed logic pushed through with self-recommending claims about his intellectual authority. Self-praise is no recommendation.


****




    As we've seen in the previous parts of this series the logic of Richard's list of connected propositions is OK up until about proposition 5 when his analysis really goes off the rails as he hits the question of probability and randomness. In the above Richard talks about not needing God. But whatever he means by God in this context, the creative potential he allocates to Nothing is startling to say the least and it looks suspiciously god-like. In particular if Nothing's creative powers extend to the capability of generating patterns of randomness that in itself is a pretty god-like trait: First and foremost random patterns are contingent - they have no logical obligation and there is no known logical contradiction entailed by their non-existence. Secondly, if we are talking algorithmic generation, randomness of varying degrees entails either very long and complex  algorithms or very large generation times or a combination of both.  In the ideal mathematical limit of pure randomness one or both of these two features extend to infinity.
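That last point can be given a crude illustration (my own sketch, using an off-the-shelf compressor as a rough stand-in for algorithmic description length): an ordered string compresses to almost nothing, whereas a pattern with no exploitable order stubbornly refuses to compress, i.e. its shortest available description is about as long as the pattern itself.

```python
# A rough illustration of algorithmic expense: ordered vs (pseudo-)random data.
# zlib is only a crude stand-in for true algorithmic (Kolmogorov) complexity.
import os
import zlib

ordered = b"AB" * 50_000           # 100,000 bytes of obvious repeating pattern
random_ish = os.urandom(100_000)   # 100,000 bytes with no exploitable pattern

print(len(zlib.compress(ordered)))      # a few hundred bytes: a short "description"
print(len(zlib.compress(random_ish)))   # roughly 100,000 bytes: no shortcut found
```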


    If Richard is trying to tell us that the creative source he calls Nothing is in fact a generator of genuinely random patterns then I think we are clear what Richard Carrier's god looks like. 



 

****



.....to be continued...? 


    There are still some remaining paragraphs to consider in Richard Carrier's post but as far as the thrust of my criticism is concerned his closing passages will entail just more of the same kind of critique; that is, criticism of his fallacies revolving round his misconceptions about probability and randomness. So, I may or may not finish the series depending on how I feel and whether I consider it to be time well spent....I'll see.



   CAVEAT


   Disagreeing with Richard Carrier on the above issues should not be taken as a sign that I identify as being a member of some polar opposite tribe. For example, it is likely that I agree with him on many issues particularly when he is criticizing the hard-right. 

Wednesday, January 29, 2025

Bill Dembski's Information Conservation Thesis Falls Over


NAID's Stephen Meyer interviewed by Unwoke Right Wing Republican Dan Crenshaw. 
Evidence that NAID has become part of the  politicized culture war


I see that William "Bill" Dembski has done a post on Evolution News on the subject of the "Conservation of Information". The article is for the most part an interesting history of that phrase and goes to show that "information" has a number of meanings dependent on the discipline where it is being used, with Bill Dembski having his own proprietary concerns tied up with his support of the North American Intelligent Design (NAID) community.  See here for the article:

Conservation of Information: History of an Idea | Evolution News

Bill's particular information interest seems to lie with the so-called "No Free Lunch Theorems". These theorems concern the mathematical limits on computer algorithms purposed to search for (and/or generate) configurations with properties of particular interest. Bill's focus on the "No Free Lunch Theorems" is bound up with the NAID community's challenge to standard evolution, a process which they see as a threat to their self-inflicted XOR creation dichotomy; viz: either "God/Intelligence did it" XOR "Blind unguided natural forces did it".

But Bill gets full marks for spotting the relevance of these theorems to evolutionary theory: evolution does have at least some features isomorphic with computer searches; in particular these theorems do throw some light on evolution's "search", reject and select mechanism which locks in organic configurations. So, the least I can say is that Bill's interest in the "No Free Lunch Theorems" is based on what looks to be a potentially fruitful avenue of study. However, although it is true that the "No Free Lunch Theorems" reveal interesting mathematical limits on computer searches, as we will see Bill has gone too far in trying to co-opt these theorems for his concept of information conservation; in fact, to the contrary, I would say that these theorems prove that Bill is wrong about the conservation of information.


                                                                                             ***

We can get a gut feeling for the No free lunch theorems with the impressionistic & informal mathematical analysis in this post. 

(Note: I arrived at similar conclusions in these two essays...

GeneratingComplexity2c.pdf - Google Drive

CreatingInformation.pdf - Google Drive 

These essays are more formal and cover the subject in more detail)

***

We imagine that we have a set of computer programs executing in parallel with the intention of finding out if at some point in their computations they generate examples of a particular class of configuration. These configurations are to be found somewhere in an absolutely huge domain of possible configurations that I shall call D and which numbers D members, where D is extremely large. It is a well-known fact that most of the members of D will likely be highly disordered.

A computer "search" starts with its initial algorithmic information  usually coded in the form of a character string or configuration S of length S. This configurational string contains the information informing the computing machinery how to generate a sequence of configurations C1C2,.....,Cn,.... etc. The software creates this sequence by modifying the current configuration Cn in order to create the next configuration Cn+1. A crucial operational characteristic of algorithms is that they are capable of making if-then-else type decisions which means that the modifications leading to Cn+1 will be dependent on configurational features found in Cn. It is this decisional feature of executed algorithms which gives them their search, reject and select characternot unlike evolution. This means that their trajectory through configuration space is often very difficult to predict without actually executing the algorithm. This is because the conditional decision-making of algorithms means that we can't predict what direction an algorithm will take at any one point in the computation until the conditions it is responding to have actually been generated by the algorithm. The concept of computational irreducibility is relevant here. 

In his article Bill is careful to describe the components of search algorithms, components which give them their search, reject & select character. But for my purposes we can simplify things by ignoring these components and only give cognizance to the fact that an algorithm takes its computer along what is possibly a non-linear trajectory in configuration space. We can also drop Bill's talk of the algorithm aiming for a specified target and then stopping since in general an algorithm can go on indefinitely moving through configuration space endlessly generating configurations as does conventional evolution. All we need to be concerned about here is the potentiality for algorithms to generate a class of configs of interest in a  "time" T where T is measured in algorithmic steps. 

                                                ***

If we have an algorithm with a string length of S then the maximum number of possible algorithms that can be constructed given this string length is A^S, where A is the number of characters in the character set used to write the algorithm.

We now imagine that we have these possible A^S algorithms all executing in parallel for T steps. It then follows that the maximum number of configurations C which potentially can be generated by these possible algorithms of length S will be no greater than the limits set by the following relationship....

C <= A^S × T

 Relation 1.0

...where C is the number of configurations that can be created in time T if the set of algorithms are run in parallel and assuming that a) T is measured in algorithmic steps and that b) the computing machinery is only capable of one step at a time and generates one configuration per step per algorithm.  

If the class of configurations we are interested in exists somewhere in a huge domain D consisting of D configurations and where, for practically realistic execution times T:

                                     D >>> C

Relation 2.0

...then the relatively meager number of configurations our algorithm set can generate in realistic times like T is a drop in the ocean when compared to the size of the set of configurational possibilities that comprises D. If relationship 2.0 holds then it is clear that, given realistic times T, our "searches" will be unable to access the vast majority of configurations in D.
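To get a feel for the sizes involved, here is a small numerical sketch of relations 1.0 and 2.0 with made-up but modest parameters (illustrative only):

```python
# Illustrating relations 1.0 and 2.0: the configurations reachable by all
# length-S programs in T steps (C <= A^S * T) versus the size D of the space
# in which the configurations of interest live. All parameters are arbitrary.
A = 2          # size of the character set (binary)
S = 100        # length of the algorithm string
T = 10**15     # a generous number of algorithmic steps
L = 1000       # length of the target configurations, so D = A**L

C_max = (A ** S) * T    # relation 1.0
D = A ** L              # size of the configuration domain

print("C_max has", len(str(C_max)), "digits")   # ~46 digits
print("D has", len(str(D)), "digits")           # ~302 digits
print("D >>> C ?", D > C_max)                   # True: relation 2.0, the drop in the ocean
```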

With the above relationships in mind no free lunch starts to make some sense: If we are looking for algorithms which generate members of a particular class of configuration of interest (e.g. organic-like configurations) then for the algorithmic search to have a chance of succeeding in a reasonable time we require one of the following two conditions to be true...

1. Assuming that such exists then an algorithm of reasonable length S has to be found which is able to generate the targeted class of configurations within a reasonable time T.  However, if relationship 2.0 holds then it is clear that this option will not work for the vast majority of configurations in D.

2.  The alternative is that we invalidate relationship 2.0 by either a) allowing the algorithms of length S to be large enough so that A^S ~ D, or b) allowing the execution time T of these algorithms to be sufficiently large so that T ~ D, or c) allowing that T and A^S when combined invalidate relationship 2.0.

***

So, with the foregoing in mind we can see that if an algorithm is to generate a stipulated class of solution in domain D in a reasonable time T then either a) it has to be logically possible to code the algorithmic solution into a starting string S of reasonable length S, or b) we have to code the required information into a long string S of length S such that A^S ~ D.

In case a) both S and T are of a practically reasonable magnitude, from which it follows, given relationship 1.0, that little of the domain D can be generated by such algorithms and therefore the majority of configurations that could possibly be designated as of interest in D (especially if they are complex disordered configurations) cannot be found by these case-a algorithms. In case b) the starting string S, in terms of the number of possible algorithms that can be constructed, is commensurate with the size of D and therefore could possibly generate configurations of stipulated interest in a reasonable time.

Therefore it follows that if we are restricted to relatively short algorithm strings of length S then these algorithms will only have a chance of reaching the overwhelming majority of configurations in D after very long execution times. If our configurations of designated interest are in this long-execution-time region of D then these configurations will demand large values of T to generate. Long-execution-time algorithms, absent any helpful starting strings which provide "short cut" information, are I think what Bill calls "blind search algorithms". That emphasis on the word "blind" is a loaded misnomer which appeals to the NAID community for reasons which I hope will become clear.

***


For Bill, this is what no free lunch means...

Because no-free-lunch theorems assert that average performance of certain classes of search algorithms remain constant at the level of blind search, these theorems have very much a conservation of information feel in the sense that conservation is strictly maintained and not merely that conservation is the best that can be achieved, with loss of conservation also a possibility

It's true that unless primed with the right initial information by far and away the majority of algorithms will reach most targets of an arbitrarily designated interest only after very long execution times involving laborious searching.....ergo, upfront information that lengthens S is needed to shorten the search; in fact this is always true by definition if we are wanting to generate configurations of interest that are also random configurations. 

So, the following is my interpretation of what Bill means by the conservation of information; namely, that to get the stipulated class of garbage out in reasonable time you have to put the right garbage in from the outset. The "garbage in" is a starting string S of sufficient length to tip the algorithm off as to where to look. The alternative is to go for searches with very long execution times T. So, paraphrasing Bill, we might say that his conservation of information can be expressed by this caricatured equation:

G_in = G_out

Relation 3.0

....where G_in represents some kind of informational measure of the "garbage" going in and G_out is the informational measure of the garbage coming out of the computation. But the following is the crucial point which, as we will see, invalidates Bill's conservation of information: although relationship 3.0 gives Bill his conservation-of-information feel, it is an approximation which only applies to reasonable execution times.....it neglects the fact that the execution of an algorithm does create information, if only slowly. That Bill has overlooked the fact that what he calls "blind searches" nevertheless slowly generate information becomes apparent from the following analysis.

***

If we take the log of relation 1.0 we get:


                                                         Log (C) <= S Log (A) + Log(T)

relation 4.0

The value C is the number of configurations that the A^S algorithms will generate in time T and this will be less than or equal to the righthand side of the above relation. The probability of one of these C configurations being chosen at random will be 1/C. Converting this probability to a Shannon information value, I, gives:

I = - Log (1/C) = Log (C)

relation 5.0

Therefore substituting I into 4.0 gives:

I <= S Log (A) + Log(T)

relation 6.0

Incorporating Log (A) into a generalized measure of string length, S gives....

I <= S + Log(T)

relation 7.0

From this relationship we can see that parallel algorithms do have the potential to generate Shannon information with time T, and that the information is not just incorporated from the outset in a string of length S. However, we do notice that because the information generated by execution time is a log function of T, that information is generated very slowly. This is what Bill has overlooked: what he derisively refers to as a "blind search" (sic) actually has the potential to generate information, if slowly. Bill's view is expressed further in the following quote from his article (with my emphases and with my insertions in square brackets).....

With the no-free-lunch theorems, something is clearly being conserved [No, wrong] in that performance of different search algorithms, when averaged across the range of feedback information, is constant and equivalent to performance of blind search. [Log(T) is the "blind search" component] The question then arises how no free lunch relates to the consistent claim in the earlier conservation-of-information literature about output information not exceeding input information. In fact, the connection is straightforward. The only reason to average performance of algorithms across feedback information is if we don’t have any domain-specific information to help us find the target in question. [The "domain-specific" information is implicit in the string length term S in relation 7.0]

Consequently, no free lunch tells us that without such domain-specific information, we have no special input information to improve the search, and thus no way to achieve output information that exceeds the capacities of blind search. When it comes to search, blind search is always the lowest common denominator — any search under consideration must always be able to do at least as good as blind search because we can always execute a blind search.[Oh no we can't Bill, at least not practically quickly enough under the current technology; we still await the technological sophistication to implement the expanding parallelism needed for "blind search" to be effective, the holy grail of computing. "Blind search" is a much more sophisticated idea than our Bill and his NAID mates are making out!] With no free lunch, it is blind search as input and blind search as output. The averaging of feedback information treated as input acts as a leveler, ensuring parity between information input and output. No free lunch preserves strict conservation [Tough, not true!] precisely because it sets the bar so low at blind search.

By distilling its findings into a single fundamental relation of probability theory, this work provides a definitive, fully developed, general formulation of the Law of Conservation of Information, showing that information that facilitates search cannot magically materialize out of nothing but instead must be derived from pre-existing sources.[False; information derives not just from S, but can also creep in from an algorithm's  execution time T ]

Blind search, blind search, blind search, blind, blind, blind,...... the repeated mantra of NAID culture which, with its subliminal gnosto-dualism, repeatedly refers to the resources of God's creation as a place of "blind natural forces". Sometimes you will also hear them talk about "unguided natural forces". But in one sense I would maintain the cosmos is far from "natural", and this is evidenced by the sense of wonder its highly contingent form engenders among theists and atheists alike, all of whom can advance no logically obliging reason as to its highly organised configuration (except perhaps Richard Carrier, whose arrogance on this score would do Zaphod Beeblebrox proud).

Bill's last sentence above is clearly false, as false as can be; he's overlooked the slowly growing information term in relation 7.0. Information is not conserved during a search because the so-called "blind search" (sic) term is slowly, almost undetectably, creating information. There is therefore no "strict conservation of information" (sic). That the so-called "blind search" (sic) is being understated by Bill and the NAID culture he represents becomes very apparent as soon as we realize that relation 7.0 has been derived on the assumption that we are using parallel processing; that is, a processing paradigm where the number of processors doing the computation is constant. But if we start thinking about the exponentials of a process which utilizes expanding parallelism, the second term on the righthand side of 7.0 has the potential to become linear in T and therefore highly significant. This is why so much effort and cash is being put into quantum computing; quantum computers clearly create information at a much more rapid rate, and it is the monumental resources being invested in this line of cutting-edge research which gives the lie to Bill's contention that information is conserved during computation and that somehow "blind search" rates as a primitive last resort.
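A back-of-the-envelope sketch of that contrast (purely illustrative numbers, not a model of any real machine, quantum or otherwise): under fixed parallelism the execution-time contribution to relation 7.0 crawls along as Log(T), whereas if the number of processors doubles every step then the count of explored configurations grows like 2^T and its logarithm grows linearly in T.

```python
# Fixed parallelism vs expanding parallelism in the information bound of
# relation 7.0. S stands for the generalized starting-string term; the values
# here are illustrative only.
import math

S = 50

for T in (10, 1_000, 1_000_000):
    fixed = S + math.log2(T)   # I <= S + Log(T): slow logarithmic growth
    expanding = S + T          # log2(2**T) = T: the second term becomes linear in T
    print(T, round(fixed, 1), expanding)
```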


                                                                       ***

As far as the big evolution question is concerned I regard this matter with studied detachment. God as the sovereign author of the cosmic story could introduce information into the cosmic configuration generator using either or both terms in relation 7.0; in particular, if unlike primitive humanity at our current technological juncture God has at his fingertips the power of expanding parallelism to crack the so-called blind search problem, the second term on the righthand side of 7.0 has the potential to become significant. Accordingly, I reject NAID's wrongly identified "blind natural forces" category when those forces are in fact highly sophisticated because they are in the hands of Omniscient Omnipotence. The trouble is that the NAID community have heavily invested in an anti-evolution culture and it looks like they've passed the point of no return, such is their huge social and tribal identification with anti-evolutionism. Ironically, even if bog-standard evolution is true (along with features like junk DNA) we are still faced with the Intelligent Design question. As for myself I have no indispensable intellectual investment in either the evolutionist or anti-evolutionist positions.

                                                    ***


As I have remarked so many times before, what motivates NAID (& YEC) culture's aversion to the idea that information can be created by so-called "blind natural forces" is this culture's a priori anti-evolution stance. Underlying this stance, I propose, is a subliminal gnosto-dualist mindset, and this mindset in this subliminal form afflicts Western societies across the board, from atheism to authoritarian & touchy feely expressions of Christianity; in fact Western religious expression in general. But that's another story. (See for example my series on atheist Don Cupitt - a series yet to be completed)

What's compounded my problem with NAID & YEC nowadays is their embrace of unwoke political culture, a culture which automatically puts them at odds with the academic establishment. I'll grant that that establishment and its supporters have often (or at least sometimes) subjected outsiders (like Bill for example) to verbal abuse and cancellation (e.g. consider Richard Dawkins & the Four Horsemen, RationalWiki etc.). This has helped urge them to find friends among the North American far-right academia-hating tribes and embrace some of their political attitudes (see here). As I personally by and large support academia (but see here) it is therefore likely that I too would be lumped in by the NAID & YEC communities as a "woke" sympathizer, even though I reject any idea that the problems of society can be finally fixed by social engineering initiated centrally, least of all by Marxist social engineering. But then I'm also a strong objector to far-right libertarian social engineering which seeks a society regulated purely by a community's use of their purses (and which is then prey to the chaotic non-linearities of market economics and power grabbing by plutocratic crony capitalists). In today's panicked and polarized milieu the far-right would see even a constitutional Royalist like myself, who is also in favour of a regulated market economy, as at best a diluted "socialist" and at worst a far-left extremist, ripe for the woke-sin-bin!



NOTE: An article on "Conservation of Information" has recently popped up on Panda's Thumb. See here: Conservation of arguments