Wednesday, January 29, 2025

Bill Dembski's Information Conservation Thesis Falls Over

(This post is still undergoing correction and enhancement)

NAID's Stephen Meyer interviewed by Unwoke Right Wing Republican Dan Crenshaw. 
Evidence that NAID has become part of the  politicized culture war


I see that William "Bill" Dembski has done a post on Evolution News on the subject of the "Conservation of Information". The article is for the most part an interesting history of that phrase and goes to show that "information" has a number of meanings dependent on the discipline where it is being used, with Bill Dembski having his own proprietary concerns tied up with his support of the North American Intelligent Design (NAID) community.  See here for the article:

Conservation of Information: History of an Idea | Evolution News

Bill's particular information interest seems to lie with the so-called "No Free Lunch Theorems". These theorems concern the mathematical limits on computer algorithms designed to search for configurations with properties of particular interest. Bill's focus on the "No Free Lunch Theorems" is bound up with the NAID community's challenge to standard evolution, a process which they see as a threat to their self-inflicted XOR creation dichotomy; viz: either "God Intelligence did it" XOR "Blind unguided natural forces did it".

But Bill gets full marks for spotting the relevance of these theorems to evolutionary theory: Evolution does have at least some features isomorphic with computer searches; in particular these theorems do throw some light on evolution's search, reject and select mechanism which locks in organic configurations. So, the least I can say is that Bill's interest in the "No Free Lunch Theorems" is based on what looks to be a potentially fruitful avenue of study. However, although it is true that the "No Free Lunch Theorems" reveal interesting mathematical limits on computer searches, as we will see Bill has gone too far in trying to co-opt these theorems for his concept of information conservation; in fact, to the contrary, I would say that these theorems prove that Bill is wrong about the conservation of information.


                                                                                             ***

We can get a gut feeling for the No free lunch theorems with the impressionistic & informal mathematical analysis in this post. 

(Note: I arrived at similar conclusions in these two essays...

GeneratingComplexity2c.pdf - Google Drive

CreatingInformation.pdf - Google Drive 

These essays are more formal and cover the subject in more detail)

***

We imagine that we have a set of computer programs executing in parallel with the intention of finding out if at some point in their computations they generate examples of a particular class of configuration. These configurations are to be found somewhere in an absolutely huge domain of possible configurations that I shall call D and which numbers D members, where D is extremely large. It is a well known fact that most of the members of D will likely be highly disordered.

A computer "search" starts with its initial algorithmic information  usually coded in the form of a character string or configuration S of length S. This configurational string contains the information informing the computing machinery how to generate a sequence of configurations C1C2,.....,Cn,.... etc. The software creates this sequence by modifying the current configuration Cn in order to create the next configuration Cn+1. A crucial operational characteristic of algorithms is that they are capable of making if-then-else type decisions which means that the modifications leading to Cn+1 will be dependent on configurational features found in Cn. It is this decisional feature of executed algorithms which gives them their search, reject and select characternot unlike evolution. This means that their trajectory through configuration space is often very difficult to predict without actually executing the algorithm. This is because the conditional decision-making of algorithms means that we can't predict what direction an algorithm will take at any one point in the computation until the conditions it is responding to have actually been generated by the algorithm. The concept of computational irreducibility is relevant here. 

In his article Bill is careful to describe the components of search algorithms, components which give them their search, reject & select character. But for my purposes we can simplify things by ignoring these components and only give cognizance to the fact that an algorithm takes its computer along what is possibly a non-linear trajectory in configuration space. We can also drop Bill's talk of the algorithm aiming for a specified target and then stopping since in general an algorithm can go on indefinitely moving through configuration space endlessly generating configurations as does conventional evolution. All we need to be concerned about here is the potentiality for algorithms to generate a class of configs of interest in a  "time" T where T is measured in algorithmic steps. 

                                                ***

If we have an algorithm with a string length of S then the maximum number of possible algorithms that can be constructed given this string length is A^S, where A is the number of characters in the character set used to write the algorithm.

We now imagine that we have these possible A^S algorithms all executing in parallel for T steps. It then follows that the maximum number of configurations C which potentially can be generated by these possible algorithms of length S will be no greater than the limits set by the following relationship....

C <= A^S × T

 Relation 1.0

...where C is the number of configurations that can be created in time T if the set of algorithms are run in parallel and assuming that a) T is measured in algorithmic steps and that b) the computing machinery is only capable of one step at a time and generates one configuration per step per algorithm.  
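To get a feel for the scales involved, here is a back-of-envelope calculation (the values of A, S, T and the 1000-bit domain are my own arbitrary, purely illustrative choices) of relation 1.0, anticipating relation 2.0 which follows:

```python
# Back-of-envelope scale comparison for relation 1.0 (all numbers are my own
# arbitrary, illustrative choices). Python's big integers handle this exactly.

A = 128                      # characters available for writing algorithms
S = 100                      # length of the algorithm string
T = 10**15                   # algorithmic steps (a very generous run time)

C_max = (A ** S) * T         # relation 1.0: C <= A^S x T
D = 2 ** 1000                # a domain of all 1000-bit configurations

print("C_max has roughly", len(str(C_max)), "decimal digits")   # ~ 226
print("D     has roughly", len(str(D)), "decimal digits")       # ~ 302
print("so at most about 1 part in 10^%d of D is reachable"
      % (len(str(D)) - len(str(C_max))))
```

Even with generous choices for S and T, the reachable fraction of the domain is vanishingly small, which is the content of relation 2.0 introduced next.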

If the class of configurations we are interested in exists somewhere in a huge domain D consisting of D configurations, where for practically realistic execution times T:

                                     D >>> C

Relation 2.0

...then the relatively meager number of configurations our algorithm set can generate in realistic times like T are a drop in the ocean when compared to the size of the set of the configurational possibilities that comprise D. If relationship 2.0 holds then it is clear that given realistic times T, our "searches" will be unable to access the vast majority of configurations in D.

With the above relationships in mind no free lunch starts to make some sense: If we are looking for algorithms which generate members of a particular class of configuration of interest (e.g. organic-like configurations) then for the algorithmic search to have a chance of succeeding in a reasonable time we require one of the following two conditions to be true...

1. Assuming that such an algorithm exists, an algorithm of reasonable length S has to be found which is able to generate the targeted class of configurations within a reasonable time T.  However, if relationship 2.0 holds then it is clear that this option will not work for the vast majority of configurations in D.

2.  The alternative is that we invalidate relationship 2.0 by either a) allowing the algorithms of length S to be large enough so that A^S ~ D, or b) allowing the execution time T of these algorithms to be sufficiently large so that T ~ D, or c) allowing that T and A^S when combined invalidate relationship 2.0.

***

So, with the foregoing in mind we can see that if an algorithm is to generate a stipulated class of solution in domain D in a reasonable time T it either a) has to be logically possible to code the algorithmic solution in a starting string S of reasonable length S or b) we have to code the required information into a long string S of length S such that A^S ~ D.

In case a) both S and T are of a practically reasonable magnitude from which it follows that given relationship 1.0 then little of the domain D  can be generated by such algorithms and therefore the majority of configurations that could possibly be designated as of interest in D (especially if they are complex disordered configurations) can not be found by these algorithms. In case b) the starting string S, in terms of the number of possible algorithms that can be constructed, is commensurate with the size of D.

Therefore it follows that if we are restricted to relatively short algorithm strings of length S then these algorithms will only have a chance of reaching the overwhelming majority of configurations in D after very long execution times. If our configurations of designated interest are in this long execution time region in D these configurations will take a very long time to generate. Long execution time algorithms, absent any helpful starting strings which provide "short cut" information, are I think what Bill calls "blind search algorithms". That emphasis on the word "blind" is a loaded misnomer which appeals to the NAID community for reasons which I hope will become clear.

***


For Bill, this is what no free lunch means...

Because no-free-lunch theorems assert that average performance of certain classes of search algorithms remain constant at the level of blind search, these theorems have very much a conservation of information feel in the sense that conservation is strictly maintained and not merely that conservation is the best that can be achieved, with loss of conservation also a possibility

It's true that, unless primed with the right initial information, by far and away the majority of algorithms will reach most targets that can be designated as of interest only after very long execution times involving laborious searching.....ergo, upfront information that lengthens S is needed to shorten the search; in fact this is always true by definition if we are wanting to generate random configurations.

So, the following is my interpretation of what Bill means by the conservation of information; namely, that to get the stipulated class of garbage out in reasonable time you have to put the right garbage in from the outset. The "garbage in" is a starting string S of sufficient length to tip the algorithm off as to where to look. The alternative is to go for searches with very long execution times T. So, paraphrasing Bill, we might say that his conservation of information can be expressed by this caricatured equation:

Gin = Gout

Relation 3.0

Where Gin represents some kind of informational measure of the "garbage" going in and Gout is the informational measure of the garbage coming out of the computation. But the following is the crucial point which as we will see invalidates Bill's conservation of information: Relationship 3.0 gives Bill his conservation of information feel but it is an approximation which only applies to reasonable execution times.....it neglects the fact that the execution of an algorithm does create information if only slowly. That Bill has overlooked the fact that what he calls "blind searches" nevertheless slowly generate information becomes apparent from the following analysis.

***

If we take the log of relation 1.0 we get:


                                                         Log (C) <= S Log (A) + Log(T)

relation 4.0

The value C is the number of configurations that A^S algorithms will generate in time T and this will be less than or equal to the righthand side of the above relation. The probability of one of these C configurations being chosen at random will be 1/C. Converting this probability to a Shannon information value, I, gives:

I = - Log (1/C) = Log (C)

relation 5.0

Therefore substituting I into 4.0 gives:

I <= S Log (A) + Log(T)

relation 6.0

Incorporating Log (A) into a generalized measure of string length, S gives....

I <= S + Log(T)

relation 7.0
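As a quick numerical check on how slowly the Log(T) term in relation 7.0 grows (the figures below are my own illustrative choices, with information measured in bits), even a run of 10^15 algorithmic steps contributes only about 50 bits:

```python
# Illustrative numbers for relation 7.0, I <= S + log2(T), with information
# measured in bits and S taken as the bit count of the starting string.
import math

S_bits = 1000                          # information coded into the starting string
for T in (10**3, 10**6, 10**9, 10**15):
    gain = math.log2(T)                # bits contributed by T algorithmic steps
    print(f"T = {T:>16}: log2(T) ~ {gain:5.1f} bits, so I <= {S_bits + gain:7.1f} bits")
```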

From this relationship we can see that parallel algorithms do have the potential to generate Shannon information with time T, and the information is not just incorporated from the outset in a string of length S. However, we do notice that because the information generated by execution time is the log function of T, that information is generated very slowly. This is what Bill has overlooked: What he derisively refers to as a "blind search" (sic) actually has the potential to generate information, if very slowly. Bill's view is expressed further in the following quote from his article (With my emphases and insertions in red).....

With the no-free-lunch theorems, something is clearly being conserved in that performance of different search algorithms, when averaged across the range of feedback information, is constant and equivalent to performance of blind search.[Log(T) is the "blind search" component] The question then arises how no free lunch relates to the consistent claim in the earlier conservation-of-information literature about output information not exceeding input information. In fact, the connection is straightforward. The only reason to average performance of algorithms across feedback information is if we don’t have any domain-specific information to help us find the target in question.[The "domain-specific" information is implicit in the string S of length S in relation 7.0]

Consequently, no free lunch tells us that without such domain-specific information, we have no special input information to improve the search, and thus no way to achieve output information that exceeds the capacities of blind search. When it comes to search, blind search is always the lowest common denominator — any search under consideration must always be able to do at least as good as blind search because we can always execute a blind search.[Oh no we can't Bill, at least not practically quickly enough under the current technology; we still await the technological sophistication to implement the expanding parallelism needed for "blind search" to be effective, the holy grail of computing. "Blind search" is a much more sophisticated idea than our Bill and his NAID mates are making out!] With no free lunch, it is blind search as input and blind search as output. The averaging of feedback information treated as input acts as a leveler, ensuring parity between information input and output. No free lunch preserves strict conservation [Tough, not true!] precisely because it sets the bar so low at blind search.

By distilling its findings into a single fundamental relation of probability theory, this work provides a definitive, fully developed, general formulation of the Law of Conservation of Information, showing that information that facilitates search cannot magically materialize out of nothing but instead must be derived from pre-existing sources.[False; information derives not just from S, but can also creep in from an algorithm's  execution time T ]

Blind search, blind search, blind search, blind, blind, blind,...... the repeated mantra of NAID culture which with its subliminal gnosto-dualism repeatedly refers to the resources of God's creation as a place of "blind natural forces". Sometimes you will also hear them talk about "unguided natural forces".

Bill's last sentence above is clearly false, as false can be; he's overlooked the slowly growing information term in relation 7.0. Information is not conserved during a search because the so-called "blind search" (sic) term is slowly, almost undetectably creating information. There is therefore no "strict conservation of information" (sic). That the so-called "blind search" (sic) is being understated by Bill and the NAID culture he represents becomes very apparent as soon as we realize that equation 7.0 has been derived on the assumption that we are using parallel processing; that is, a processing paradigm where the number of processors doing the computation is constant. But if we start thinking about the exponentials of a process which utilizes expanding parallelism the second term on the righthand side of 7.0 has the potential to become linear in T and therefore highly significant. This is why so much effort and cash is being put into quantum computing; Quantum computers clearly create information at a much more rapid rate and it is the monumental resources being invested in this line of cutting edge research which gives the lie to Bill's contention that information is conserved during computation and that somehow "blind search" rates as a primitive last resort. 
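To illustrate the contrast (a hedged toy comparison of my own, not a model of any actual quantum hardware): with a fixed number of processors the number of generated configurations grows only linearly in T, so the information bound grows like log2(T); but if the processor count doubles at every step, the configuration count grows like 2^T and the bound becomes roughly linear in T.

```python
# Toy comparison (my own sketch): information bound under fixed parallelism
# versus an idealized expanding parallelism where processors double each step.
import math

def info_fixed(n_processors, T):
    """Configurations ~ n_processors * T, so the bound grows like log2(T)."""
    return math.log2(n_processors * T)

def info_expanding(T):
    """If processors double each step, configurations ~ 2**T, so bits ~ T."""
    return math.log2(2 ** T)           # equals T; written out for clarity

for T in (10, 20, 40, 80):
    print(f"T = {T:3}: fixed ~ {info_fixed(1000, T):6.1f} bits, "
          f"expanding ~ {info_expanding(T):6.1f} bits")
```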


                                                                       ***

As far as the big evolution question is concerned I regard this matter with studied detachment. God as the sovereign author of the cosmic story could introduce information into the cosmic configuration generator using either or both terms in relation 7.0; in particular, if unlike primitive humanity at our current technological juncture God has at his fingertips the power of expanding parallelism to crack the so-called blind search problem, the second term on the righthand side of 7.0 has the potential to become significant. Accordingly, I reject NAID's wrongly identified "blind natural forces" category when those forces are in fact highly sophisticated because they are in the hands of Omniscient Omnipotence. The trouble is that the NAID community have heavily invested in an anti-evolution culture and it looks like they've passed the point of no return, such is their huge social and tribal identification with anti-evolutionism. Ironically, even if bog-standard evolution is true (along with features like junk DNA) we are still faced with the Intelligent Design question. As for myself I have no indispensable intellectual investment in either the evolutionist or anti-evolutionist positions.

                                                    ***


As I have remarked so many times before, what motivates NAID (& YEC) culture's aversion to the idea that information can be created by so-called "blind natural forces" is this culture's a priori anti-evolution stance. Underlying this stance, I propose, is a subliminal gnosto-dualist mindset, and this mindset in this subliminal form afflicts Western societies across the board, from atheism to authoritarian & touchy feely expressions of Christianity; in fact Western religion in general. But that's another story. (See for example my series on atheist Don Cupitt - a series yet to be completed)

What's compounded my problem with NAID & YEC nowadays is their embrace of unwoke political culture, a culture which automatically puts them at odds with the academic establishment. I'll grant that that establishment and its supporters have often (or at least sometimes?) subjected outsiders (like Bill for example) to verbal abuse (e.g. consider Richard Dawkins & the Four Horsemen, RationalWiki etc.) and this has helped urge them to find friends among the North American far-right academia-hating tribes and embrace some of their political attitudes (See here). As I personally by and large support academia (but see here) it is therefore likely that I too would be lumped together by the NAID & YEC communities as a "woke" sympathizer, even though I reject any idea that the problems of society can be finally fixed by centralized social engineering, least of all by Marxist social engineering. But then I'm also a strong objector to far-right libertarian social engineering which seeks a society regulated purely by a community's use of their purses (and which is then prey to the chaotic non-linearities of market economics and power grabbing by plutocratic crony capitalists). In today's panicked and polarized milieu the far-right would see even a constitutional Royalist like myself, who is also in favour of a regulated market economy, as at best a diluted "socialist" and at worst a far-left extremist, ripe for the woke-sin-bin!



NOTE: An article on "Conservation of Information" has recently popped up on Panda's Thumb. See here: Conservation of arguments

Friday, January 24, 2025

Let's Carry on Carriering Part IV



In this post I continue analyzing a web post by self-recommending professional atheist Richard Carrier. 

The other parts of this series can be seen in the links below:

Quantum Non-Linearity: Let's Carry on Carriering Part I

Quantum Non-Linearity: Let's Carry on Carriering Part II

Quantum Non-Linearity: Let's Carry on Carriering Part III

Before going on with the rest of Richard's post, below I recap Richard's proposition 8 and comment on it again.

***


RICHARD: Proposition 8: If every logically possible thing that can happen to Nothing has an equal probability of occurring, then every logically possible number of universes that can appear has an equal probability of occurring.

This is logically entailed by the conjunction of Propositions 6 and 7. So again it cannot be denied without denying, again, Proposition 1.

MY COMMENT: In the above quote Richard is telling us that given this entity he calls Nothing we can infer that every logically possible universe that can arise from Nothing has an equal probability of occurring. As I have said in the previous parts, probability isn't a property of the object we are taking cognizance of (in this case the object is Nothing) but a function of the observer's level of knowledge about an object; the negated way of saying the same thing is that probability is a measure of the observer's ignorance. In the above, therefore, Richard is merely telling us that he has no idea what Nothing is capable of generating and that all logically feasible bets are therefore of equal probability; this equality correctly follows from the principle of equal a priori probabilities, a principle which applies to any observer who has no information which leads him/her to expect one bet over another. In this instance one of those bets includes whether or not Nothing will generate the high contingencies and complexities of random patterns. Because Richard is admitting that he knows 0.0% of nowt about Nothing these observer-based betting odds say nothing at all about what Nothing will actually generate. 

I will now continue analyzing Richard's post from where I left off in the last part (Part III).  From this point on his work is a teetering tower of endeavor with its foundations resting upon the implicit assumption that probability is a physical property which he also assumes must logically entail a totally random process; that is, a process which generates random patterns. 

***


RICHARD: And this is true regardless of the measure problem. There are lots of different ways you can slice up the “outcome” of a totally random process that’s unlimited in how much can happen—how much “stuff,” and in how many configurations, that can arise. But insofar as the “stuff” that pops out is connected to other stuff, it necessarily causally interacts with it, and that logically entails a single causally interacting “system,” which we can call a “universe” in a relevant sense. But when there is Nothing, nothing exists to make it even likely, much less ensure, that only one such “universe” will randomly materialize.

Of course, even within a single causally interacting “system,” (= a "universe") and thus within a single “universe,” it is not necessarily the case that every part of it will have the same contents and properties. Eternal inflation, for example, entails an initial chaotic universe will continue splitting off different bubble universes forever, and everyone will have different laws, contents, and properties, insofar as it’s possible to. And this is actually what we usually mean by “universe” now: one of those regions of the whole metaverse that shares a common fundamental physics (the same dimensionality of spacetime, the same fundamental constants, and the same causal history). Other regions may differ, e.g. if we fly far enough in space, maybe a trillion lightyears, we might start to enter a region of the universe where the laws and constants and shape and contents start to change.

MY COMMENT: Probabilities are defined in terms of ratios of sets of possibilities: The measure problem concerns the difficulty observers have in defining probabilities when trying to form ratios from ill-defined sets of possibilities, particularly potentially infinite sets of possibilities. If one is faced with sets of possibilities for which it is not easy to define clearcut size comparisons, then calculating probabilities (which are based on ratios of possibilities) becomes problematic. In this instance Richard is discussing the question of what class of possibilities constitutes what we would like to call a "universe" and how we measure the probability of a "universe" against the immense set of "all" possible universes. The comparison of these spectacularly vague and huge sets and the accompanying calculation of the relevant probabilities are sensitive to the methods of comparison.  (See here). 

At the start of the above quote Richard is telling us that his proposition 8 isn't affected by the measure problem; well, that may be true: For his purposes it is often enough to show that one set is  clearly much, much larger than another thus implying that the probability in question is all but zero and therefore its negated probability is all but unity. But for Richard to take us any further one first has to swallow his two seemingly unconscious assumptions: Viz:

1. That probabilities are an intrinsic property of an object when in fact they are an observer relative extrinsic property in so far as being a function of an observer's knowledge about an object. 

2. That the existence of a probability necessarily implies something capable of generating random patterns (certainly not true!).

Regarding assumption 1: Going over the point I have repeatedly made: The one-on-one element-by-element comparison between two sets needed to create ratios of possibilities and underwrite the calculation of the probability of a universe is only intelligible if we first assume, a priori, the existence of highly sophisticated third person observers for whom probability ratios (which are a measure of observer information level) are meaningful and interesting. Without the assumed existence of sufficiently cognitively sophisticated observers, probability is an unintelligible notion. Probability is not a property of something "out there" whether of universes or other; it is a measure of an observer's information about the object in question.

Regarding assumption 2:  That this seems to be some kind of habit of mind is more than hinted at when Richard says in the above quote about “the stuff” that pops out which presumably is the “outcome” of a totally random process. However, to be fair to Richard it is true that the term "probability" is often used as a metonym for randomness because the algorithmic intractability of random patterns makes them difficult to know and therefore random patterns very often entail a probability.  One of Richard's pratfalls is that I think he's conflated the use of the term "probability" as a metonym with the object it is frequently associated with (i.e. random patterns). As we will see, given a probability he wrongly infers that he has in his hands a random generator of universes; a non sequitur if there ever was one. 

***


RICHARD: However, we needn’t account for this in what follows. If it is the case—in other words, if universes in the broad sense (causally interacting systems) can themselves contain even more universes in the narrow sense (regions of a shared fundamental physics), then what follows, follows with even more certainty. Because then there are even more “universes” to make the point with. You will notice eventually how this simply makes the math even stronger, and gets us to the same conclusion with even greater force. Because all adding this does to the math, is increase how many universes a Nothing will inevitably randomly produce.

MY COMMENT:  If for the sake of argument we allow Richard's two assumptions above to slip past us then it's true that the measure problem doesn't affect his conclusion: Although we may be unable to come up with the rigorously correct ratios of possibilities it is often clear that the sets of possibilities Richard is comparing are obviously vastly different in size and so it is clear that the probabilities concerned are as near as can be to either 100% or 0%.

But the conclusions Richard draws from this exercise of probability calculation are based, once again, on the falsehood I've emboldened at the end of the above quote: Viz: Richard thinks he's proved to himself that the logical truism he calls Nothing will inevitably randomly produce...“stuff” that pops out.

Richard's argument is that if we do at least know we are dealing with huge numbers of possible universes this is only going to add more grist to his mill by feeding his gluttony for immense numbers of possible outcomes. But unfortunately for Richard there is no wind or water to drive his mill: As we can see from his last sentence above, he's assuming that observer-defined probabilities necessarily entail a random pattern generator which he is hoping will drive his system of universe creation. Well, whatever complex logical necessities Nothing contains, one thing is clear; the generation of random patterns is not known to logically follow from the Unknown and Mysterious logical necessity Richard calls "Nothing" and about which Richard can tell us very little. And again: The generation of random patterns doesn't follow as a logical necessity from observer defined probabilities, whether those probabilities are calculated correctly or not.

Richard's misconceptions around probability and randomness are continuing to run through his thinking.  He needs to revisit his bad habits of mind about probability and randomness. 

***


RICHARD: The converse is also true. If it is somehow the case that there can’t be disconnected systems, that somehow it is logically impossible for Nothing to produce multiple “universes” in the broad sense, then it must necessarily be the case that it will produce, to the same probability, multiple universes in the narrow sense. Because there is only one possible way left that it could be logically impossible for both (a) Nothing to produce more than one causal system and (b) that system be entirely governed by only one physics, is if this universe we find ourselves in is the only logically possible universe. And if that’s the case, then we don’t need any explanation for it. All fine-tuning arguments sink immediately. The probability of any universe existing but this one (given that any universe exists at all) is then zero. And the probability of fine tuning without God is then exactly and fully 100%.

MY COMMENT:  A largely valid point here: Richard is admitting that Nothing is such a big Unknown that it is conceivable that by some logic we don't yet understand Nothing entails that only one causally connected universe can exist and that this is the universe we observe (if perhaps only a small part of a much broader causally connected universe). But I doubt he'll bite this bullet: His concept of Nothing is his subliminal stand-in for "The Unknown God" in so far as this mysterious Nothing somehow implies the highly organized universe we see around us.

***


RICHARD: I doubt any theist will bite that bullet. I’m pretty sure all will insist that other universes are logically possible. 

MY COMMENT:   Theist or not I think we can be agnostic about whether or not other universes are logically possible. After all we know so little about this mysterious entity which Richard keeps calling "Nothing"; we don't even know if the logic of Nothing rules out cosmic configurations that otherwise to us seem logically possible. 

***

RICHARD: And if other universes are logically possible, it must necessarily be the case that it is logically possible either for different regions of a universe to exhibit different physics or different universes as closed causal systems to exist (with, ergo, different physics). Therefore, by disjunctive logic, if the second disjunct is ruled impossible (“different universes as closed causal systems can exist”), the first disjunct becomes a logically necessary truth (“different regions of a universe can have different physics”). Even if one were to say “there are infinitely many outcomes logically equivalent to a single universe with a single uniform physics” and “therefore” there are as many such outcomes as any version of multiverse and so “it’s fifty fifty” or “the measurement problem gets you” or whatever, Cantor strikes: as all the infinite such possible universes are already contained in possible multiverses and yet there are infinitely many more multiverses possible which cannot be included in the previous infinite set, the cardinality relation of possible multiverses to possible singleverses is still infinitely more; ergo, the probability of getting “a singleverse” rather than “a multiverse” is infinity to one against.

MY COMMENT:    Yes, I agree the number of possible multiverses, if compared against the number of possible singleverses, will be infinitely greater.  But if this relationship is to be transformed into a probability as per the last sentence (viz the probability of a singleverse against a multiverse) Richard once again must assume the pre-existence of a sufficiently sophisticated observer to make the calculation of his probability meaningful. But Richard's logic here, although valid, is irrelevant; these observer relative probabilities imply nothing about what Richard's "Nothing" will in actual fact generate.

***

RICHARD: Therefore, when there are no rules governing how many “universes” can randomly arise from Nothing, there must necessarily be either a random number of universes in the broad sense (causally separated systems) or a random number of universes in the narrow sense (regions of different physics within a single causal system), or both. Including, of course, the possibility that that number, either way, will be zero. Which is what it would mean for Nothing to produce nothing, to remain eternally nothing. Ex nihilo nihil, in other words, is simply describing one possible outcome of a true Nothing: the outcome of there being zero things arising.

But as we just confirmed, there is no rule or law that entails the number of things that will arise uncaused from Nothing is zero. In fact, zero is just one possibility out of countless other possibilities: countless other numbers of things, and thus universes, that can arise. And Proposition 6 entails each possible outcome has the same probability as each other possible outcome. Which means no outcome (such as “zero”) is more likely than any other (such as “one” or “ten billion” or “ten to the power of twenty trillion”). Hence, Proposition 9....

MY COMMENT:   And again, the bulk of the deliberations above are irrelevant. Richard's attempt to make numerical comparisons between classes of possible universes and thus arrive at one or other end of the probability spectrum is futile without building in his two hidden prior assumptions. To repeat: 1. The a priori existence of a sufficiently sophisticated cognitive perspective to make the probability calculations meaningful. 2. In this particular connection, the a priori existence of the super contingency of random pattern generators to give a meaningful hook to the observer's probability calculations.

That Richard's "Nothing" is a huge Unknown to him is evidenced by the fact that above we find him considering the case where, for all he knows, Nothing has no known rules to limit the classes from which probabilities can be calculated. He then, yet again, wrongly thinks that from these probabilities he can logically infer a random pattern generator.  Moreover, random pattern generation is a rule in itself which contradicts any notion that Nothing has no rules. 

***

RICHARD: Proposition 9: If when there is Nothing every possible number of universes has an equal probability of occurring, the probability of Nothing remaining nothing equals the ratio of one to n, where n is the largest logically possible number of universes that can occur.

MY COMMENT:  Given that our Richard is admittedly working completely in the dark as to what the logic of Nothing entails then given such an advanced state of ignorance it is true that every possible universe has an equal probability of being generated by Nothing; and this includes the possibility of literally nothing being generated by Nothing. Well, there is only one way to generate absolutely nothing, so therefore Richard is right in telling us that the probability of nothing is 1/n, where n is the largest logically possible number of universes that can occur. But yet again: We can't move from this state of hyper ignorance, expressed as a probability, to the conclusion that from this ignorance we can then infer a random generator of universes is at work. The quantified ignorance expressed by a probability evaluation tells us nothing about what Nothing will actually generate, least of all whether it will generate the hyper-complexities of random patterns.

***

RICHARD: But Proposition 6 entails n is transfinite. There is no maximum possible universes that can arise. This creates difficulties for continuing mathematically here, because no one has fully worked out a mathematics of transfinite probability. We can bypass that problem, however, the same way Archimedes originally did, by adapting the Method of Exhaustion. We’ll get there in a moment.

MY COMMENT:  No dispute that n is transfinite. But you bet there's going to be huge difficulties in defining intelligible probabilities here because measure problems make the definition of coherent ratios of possible universes highly problematic. But let's wait and see what Richard's method of exhaustion entails. Something to look forward to in Part V.

***


RICHARD: Proposition 10: If Nothing produces a random number of universes, nothing exists to prevent the contents of each of those universes from being equally random.

In other words, if it is logically possible for any universe, upon coming into existence, to have a different set of attributes than another, then each possible collection of attributes is as likely as every other. This follows by logical necessity from the absence of anything that would make it otherwise. And Nothing lacks everything, including anything that would make it otherwise. To deny this Proposition therefore requires producing a logical proof that some logical necessity makes it otherwise. Good luck.

MY COMMENT: Richard has not established that Nothing generates universes at random. All we've seen is that from the carefully measured human ignorance expressed as probabilities he's then assumed that this mysterious object he's called Nothing at least has the possibility of generating the high contingencies & complexities of randomness.  In fact in the above he does venture to assert something about Nothing; that is, that Nothing lacks everything, including anything that would make it otherwise. And yet he's somehow inferred that if Nothing produces a random number of universes, nothing exists to prevent the contents of each of those universes from being equally random. That is, he's allowing Nothing the possibility of generating the highly sophisticated complexes of random patterns. He has inferred that a lack of logical restriction logically entails the possibility of random patterns being generated. So, Richard where's the logical proof that there is some logical necessity which allows Nothing the possibility of generating these high contingencies? Good luck with that one Richard!


...to be continued

Thursday, January 02, 2025

The "Conversion" of Richard Dawkins

That makes me a Cultural Christian++.

Good on yer Richard! No complaints from me about that sentiment.


An interesting article has been published by William Dembski on the North American Intelligent Design website Evolution News. This article is about atheist Richard Dawkins' "conversion" from his lump-all-religions-together-as-unequivocally-evil stance to a mellowing attitude toward Christianity, a mellowing to such an extent that he now refers to himself as a "Cultural Christian" (although still an atheist at core of course). Read Dembski's article here...

 No. 3 Story of 2024: The New Cultural Christian | Evolution News

I'm glad to say that Dembski usually leaves me with a favourable impression especially as he has a tendency to not float some of the crassly stupid ideas that we get from other North American Intelligent Design pundits like Casey Luskin, Eric Hedin, Granville Sewell, Wesley J Smith etc. For more on the reactionary content of these NAID commentators and more see here:

Luskin: Quantum Non-Linearity: NAID Part IV: Evolution: Creation on Steriods

Smith: Quantum Non-Linearity: NAID Part V: Politics and North American Intelligent Design

Hedin & Sewell: Quantum Non-Linearity: NAID pundits Hedin and Sewell rightly criticized

Nametti and Holloway: Quantum Non-Linearity: Breaking Through the Information Barrier in Natural History Part 5

In contrast to the inept commentary we get from the above NAID pundits, the article by Dembski is a worthy read and he describes how Richard Dawkins himself has helped open atheism's Pandora's box, releasing the sociological monsters of societal, cultural & personal relativism, Marxist fundamentalism, postmodernism, nihilist secularism and the like*, the very things that stick in the gullet of not only Christians but also scientifically trained modernists like Dawkins whose chief axiom is that science is in the business of pursuing fruitful, truthful goals.

***


I did an essay on Richard Dawkins myself as far back as 1993 commenting in particular on the emerging postmodern unstable conceptual feedback that the descent into a nihilist version of atheism all too often leads to. See the following link for that essay.....

Quantum Non-Linearity: HOW TO KNOW YOU KNOW YOU KNOW IT

See also:

Quantum Non-Linearity: Evolution, Unstable Conceptual Feedback & Nihilism

***


Some of my other comments on William Dembski's work can be seen at these links.....

Quantum Non-Linearity: NAID pundit William Dembski on AI

Quantum Non-Linearity: Evolution and Islands of functionality

Quantum Non-Linearity: Extracts from a blog post by William Dembski

Quantum Non-Linearity: Dembski: oppressed by the suffocating trappings of piety

Quantum Non-Linearity: Dembski: “I’m not denying evolutionary gradualism, challenging common descent or Natural Selection” (!!!!)


ADDENDUM 11/01/25

In the following post on Evolution News Dembski mentions the so called "conservation of information"...

The Displacement Fallacy: Evolution’s Shell Game | Evolution News

This is something I disagree with Dembski about: Information can be created but only at a very slow logarithmic rate according to

I = S + Log (T)

Where I quantifies information, S is the starting information and T is a measure of the time needed to create I.  The slow creation of information has misled some to think that it can't be created. But this equation only applies for parallel processes. It is not necessarily valid for expanding parallelism (See here and here for more).

Another point of disagreement I have with Dembski is his epistemic filter. See here:

Quantum Non-Linearity: IDists! Here's another fine mess they’ve gotten us into!



Footnote

* One might lump these philosophies under the unhelpful heading of "Woke". "Woke" is a coarse-grained term which gets overused by the far-right to categorise any critics of far-right thinking such as myself: In their books I'm likely to be accused of being "Woke"! See here for example... Views, News and Pews: Woke vs. Unwoke.  And let me be clear that I regard the philosophy of the authoritarian religious far-right as at least as dangerous as that of the far left.