
Wednesday, January 29, 2025

Bill Dembski's Information Conservation Thesis Falls Over


NAID's Stephen Meyer interviewed by Unwoke Right Wing Republican Dan Crenshaw: evidence that NAID has become part of the politicized culture war.


I see that William "Bill" Dembski has done a post on Evolution News on the subject of the "Conservation of Information". The article is for the most part an interesting history of that phrase and goes to show that "information" has a number of meanings depending on the discipline in which it is used, with Bill Dembski having his own proprietary concerns tied up with his support of the North American Intelligent Design (NAID) community. See here for the article:

Conservation of Information: History of an Idea | Evolution News

Bill's particular information interest seems to lie with the so-called "No Free Lunch Theorems". These theorems concern the mathematical limits on computer algorithms purposed to search for (and/or generate) configurations with properties of particular interest. Bill's focus on the "No Free Lunch Theorems" is bound up with the NAID community's challenge to standard evolution, a process which they see as a threat to their self-inflicted XOR creation dichotomy; viz: either "God Intelligence did it" XOR "Blind unguided natural forces did it".

But Bill gets full marks for spotting the relevance of these theorems to evolutionary theory: Evolution does have at least some features isomorphic with computer searches; in particular these theorems do throw some light on evolution's "search", reject and select mechanism which locks in organic configurations. So, the least I can say is that Bill's interest in the "No Free Lunch Theorems" is based on what looks to be a potentially fruitful avenue of study. However, although it is true that these theorems reveal interesting mathematical limits on computer searches, as we will see Bill has gone too far in trying to co-opt them for his concept of information conservation; in fact, to the contrary, I would say that these theorems prove that Bill is wrong about the conservation of information.


***

We can get a gut feeling for the No free lunch theorems with the impressionistic & informal mathematical analysis in this post. 

(Note: I arrived at similar conclusions in these two essays...

GeneratingComplexity2c.pdf - Google Drive

CreatingInformation.pdf - Google Drive 

These essays are more formal and cover the subject in more detail)

***

We imagine that we have a set of computer programs executing in parallel with the intention of finding out if at some point in their computations they generate examples of a particular class of configuration. These configurations are to be found somewhere in an absolutely huge domain of possible configurations that I shall call D and which numbers D members, where D is extremely large. It is a well-known fact that most of the members of D will likely be highly disordered.

A computer "search" starts with its initial algorithmic information  usually coded in the form of a character string or configuration S of length S. This configurational string contains the information informing the computing machinery how to generate a sequence of configurations C1C2,.....,Cn,.... etc. The software creates this sequence by modifying the current configuration Cn in order to create the next configuration Cn+1. A crucial operational characteristic of algorithms is that they are capable of making if-then-else type decisions which means that the modifications leading to Cn+1 will be dependent on configurational features found in Cn. It is this decisional feature of executed algorithms which gives them their search, reject and select characternot unlike evolution. This means that their trajectory through configuration space is often very difficult to predict without actually executing the algorithm. This is because the conditional decision-making of algorithms means that we can't predict what direction an algorithm will take at any one point in the computation until the conditions it is responding to have actually been generated by the algorithm. The concept of computational irreducibility is relevant here. 

In his article Bill is careful to describe the components of search algorithms, components which give them their search, reject & select character. But for my purposes we can simplify things by ignoring these components and only giving cognizance to the fact that an algorithm takes its computer along what is possibly a non-linear trajectory in configuration space. We can also drop Bill's talk of the algorithm aiming for a specified target and then stopping, since in general an algorithm can go on indefinitely moving through configuration space endlessly generating configurations, as does conventional evolution. All we need be concerned about here is the potentiality for algorithms to generate a class of configurations of interest in a "time" T, where T is measured in algorithmic steps.

***

If we have an algorithm with a string length of S then the maximum number of possible algorithms that can be constructed given this string length is A^S, where A is the number of characters in the character set used to write the algorithm.

We now imagine that we have these A^S possible algorithms all executing in parallel for T steps. It then follows that the maximum number of configurations C which potentially can be generated by these possible algorithms of length S will be no greater than the limits set by the following relationship....

C <= A^S x T

 Relation 1.0

...where C is the number of configurations that can be created in time T if the set of algorithms are run in parallel and assuming that a) T is measured in algorithmic steps and that b) the computing machinery is only capable of one step at a time and generates one configuration per step per algorithm.  

If the class of configurations we are interested in exists somewhere in a huge domain D consisting of D configurations and where for practically realistic execution times T:

                                     D >>> C

Relation 2.0

...then the relatively meager number of configurations our algorithm set can generate in realistic times like T is a drop in the ocean when compared to the size of the set of configurational possibilities that comprise D. If relationship 2.0 holds then it is clear that given realistic times T, our "searches" will be unable to access the vast majority of configurations in D.
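To get a feel for the magnitudes involved in relations 1.0 and 2.0, here is a back-of-envelope Python calculation (the values of A, S, T and the size of D are illustrative choices of my own, worked in logarithms to keep the numbers manageable):

```python
import math

# Illustrative magnitudes for relations 1.0 and 2.0 (values are my own choices).
A = 100       # characters in the algorithm's character set
S = 1000      # length of the algorithm string
T = 10**30    # a very generous number of algorithmic steps

# Relation 1.0: C <= A^S x T, taken in log10 form.
log10_C = S * math.log10(A) + math.log10(T)   # = 2000 + 30

# The domain D: say, all binary configurations of length one million.
log10_D = 10**6 * math.log10(2)               # ~ 301,030

print(f"log10(C) <= {log10_C:,.0f}")          # ~ 2,030
print(f"log10(D)  = {log10_D:,.0f}")          # ~ 301,030
# D outnumbers C by a factor of roughly 10^299,000: relation 2.0, D >>> C.
```

Even with these absurdly generous allowances for string length and run time, the parallel searches touch only an infinitesimal corner of D.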

With the above relationships in mind no free lunch starts to make some sense: If we are looking for algorithms which generate members of a particular class of configuration of interest (e.g. organic-like configurations) then for the algorithmic search to have a chance of succeeding in a reasonable time we require one of the following two conditions to be true...

1. Assuming that such an algorithm exists, an algorithm of reasonable length S has to be found which is able to generate the targeted class of configurations within a reasonable time T. However, if relationship 2.0 holds then it is clear that this option will not work for the vast majority of configurations in D.

2. The alternative is that we invalidate relationship 2.0 by either a) allowing the algorithms of length S to be large enough so that A^S ~ D, or b) allowing the execution time T of these algorithms to be sufficiently large so that T ~ D, or c) allowing that T and A^S when combined invalidate relationship 2.0.

***

So, with the foregoing in mind we can see that if an algorithm is to generate a stipulated class of solution in domain D in a reasonable time T then either a) it has to be logically possible to code the algorithmic solution in a starting string S of reasonable length S, or b) we have to code the required information into a long string S of length S such that A^S ~ D.

In case a) both S and T are of a practically reasonable magnitude, from which it follows, given relationship 1.0, that little of the domain D can be generated by such algorithms, and therefore the majority of configurations that could possibly be designated as of interest in D (especially if they are complex disordered configurations) cannot be found by these case-a algorithms. In case b) the starting string S, in terms of the number of possible algorithms that can be constructed, is commensurate with the size of D and therefore could possibly generate configurations of stipulated interest in a reasonable time.

Therefore it follows that if we are restricted to relatively short algorithm strings of length S then these algorithms will only have a chance of reaching the overwhelming majority of configurations in D after very long execution times. If our configurations of designated interest are in this long-execution-time region of D these configurations will demand large values of T to generate. Long-execution-time algorithms, absent any helpful starting strings which provide "short cut" information, are, I think, what Bill calls "blind search algorithms". That emphasis on the word "blind" is a loaded misnomer which appeals to the NAID community for reasons which I hope will become clear.

***


For Bill, this is what no free lunch means...

Because no-free-lunch theorems assert that average performance of certain classes of search algorithms remain constant at the level of blind search, these theorems have very much a conservation of information feel in the sense that conservation is strictly maintained and not merely that conservation is the best that can be achieved, with loss of conservation also a possibility

It's true that, unless primed with the right initial information, by far and away the majority of algorithms will reach most targets of an arbitrarily designated interest only after very long execution times involving laborious searching.....ergo, upfront information that lengthens S is needed to shorten the search; in fact this is always true by definition if we are wanting to generate configurations of interest that are also random configurations.

So, the following is my interpretation of what Bill means by the conservation of information; namely, that to get the stipulated class of garbage out in reasonable time you have to put the right garbage in from the outset. The "garbage in" is a starting string S of sufficient length to tip the algorithm off as to where to look. The alternative is to go for searches with very long execution times T. So, paraphrasing Bill, we might say that his conservation of information can be expressed by this caricatured equation:

G_in = G_out

Relation 3.0

....where G_in represents some kind of informational measure of the "garbage" going in and G_out is the informational measure of the garbage coming out of the computation. But the following is the crucial point which, as we will see, invalidates Bill's conservation of information: Although relationship 3.0 gives Bill his conservation of information feel, it is an approximation which only applies to reasonable execution times.....it neglects the fact that the execution of an algorithm does create information, if only slowly. That Bill has overlooked the fact that what he calls "blind searches" nevertheless slowly generate information becomes apparent from the following analysis.

***

If we take the log of relation 1.0 we get:


Log(C) <= S Log(A) + Log(T)

relation 4.0

The value C is the number of configurations that the A^S algorithms will generate in time T, and this will be less than or equal to the righthand side of the above relation. The probability of one of these C configurations being chosen at random will be 1/C. Converting this probability to a Shannon information value, I, gives:

I = - Log (1/C) = Log (C)

relation 5.0

Therefore substituting I into 4.0 gives:

I <= S Log(A) + Log(T)

relation 6.0

Incorporating Log(A) into a generalized measure of string length S gives....

I <= S + Log(T)

relation 7.0
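A small Python illustration (my own, not from Bill's article) of just how slowly the Log(T) term in relation 7.0 grows:

```python
import math

# How slowly the Log(T) term in relation 7.0 grows (illustrative).
for T in (10**3, 10**6, 10**9, 10**12):
    print(f"T = {T:>16,}   Log2(T) = {math.log2(T):5.1f} bits")

# Even a trillion steps contribute only ~40 bits of Shannon information,
# whereas the starting string contributes S * Log2(A) bits up front.
```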

From relation 7.0 we can see that parallel algorithms do have the potential to generate Shannon information with time T; the information is not just incorporated from the outset in a string of length S. However, as the little table above suggests, because the information generated by execution time is a log function of T, that information is generated very slowly. This is what Bill has overlooked: What he derisively refers to as a "blind search" (sic) actually has the potential to generate information, if slowly. Bill's view is expressed further in the following quote from his article (with my emphases and with my insertions in red).....

With the no-free-lunch theorems, something is clearly being conserved [No, wrong] in that performance of different search algorithms, when averaged across the range of feedback information, is constant and equivalent to performance of blind search. [Log(T) is the "blind search" component] The question then arises how no free lunch relates to the consistent claim in the earlier conservation-of-information literature about output information not exceeding input information. In fact, the connection is straightforward. The only reason to average performance of algorithms across feedback information is if we don’t have any domain-specific information to help us find the target in question. [The "domain-specific" information is implicit in the string S of length S in relation 7.0]

Consequently, no free lunch tells us that without such domain-specific information, we have no special input information to improve the search, and thus no way to achieve output information that exceeds the capacities of blind search. When it comes to search, blind search is always the lowest common denominator — any search under consideration must always be able to do at least as good as blind search because we can always execute a blind search.[Oh no we can't Bill, at least not practically quickly enough under the current technology; we still await the technological sophistication to implement the expanding parallelism needed for "blind search" to be effective, the holy grail of computing. "Blind search" is a much more sophisticated idea than our Bill and his NAID mates are making out!] With no free lunch, it is blind search as input and blind search as output. The averaging of feedback information treated as input acts as a leveler, ensuring parity between information input and output. No free lunch preserves strict conservation [Tough, not true!] precisely because it sets the bar so low at blind search.

By distilling its findings into a single fundamental relation of probability theory, this work provides a definitive, fully developed, general formulation of the Law of Conservation of Information, showing that information that facilitates search cannot magically materialize out of nothing but instead must be derived from pre-existing sources.[False; information derives not just from S, but can also creep in from an algorithm's execution time T]

Blind search, blind search, blind search, blind, blind, blind,...... the repeated mantra of NAID culture which, with its subliminal gnosto-dualism, repeatedly refers to the resources of God's creation as a place of "blind natural forces". Sometimes you will also hear them talk about "unguided natural forces". But in one sense I would maintain the cosmos is far from "natural", and this is evidenced by the sense of wonder its highly contingent form engenders among theists and atheists alike, all of whom can advance no logically obliging reason as to its highly organised configuration (except perhaps Richard Carrier, whose arrogance on this score would do Zaphod Beeblebrox proud).

Bill's last sentence above is as false as false can be; he's overlooked the slowly growing information term in relation 7.0. Information is not conserved during a search because the so-called "blind search" (sic) term is slowly, almost undetectably, creating information. There is therefore no "strict conservation of information" (sic). That the so-called "blind search" (sic) is being understated by Bill and the NAID culture he represents becomes very apparent as soon as we realize that relation 7.0 has been derived on the assumption that we are using parallel processing; that is, a processing paradigm where the number of processors doing the computation is constant. But if we start thinking about the exponentials of a process which utilizes expanding parallelism, the second term on the righthand side of 7.0 has the potential to become linear in T and therefore highly significant. This is why so much effort and cash is being put into quantum computing: Quantum computers clearly create information at a much more rapid rate, and it is the monumental resources being invested in this line of cutting-edge research which give the lie to Bill's contention that information is conserved during computation and that somehow "blind search" rates as a primitive last resort.
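As a back-of-envelope sketch of why expanding parallelism changes the picture (my own working, on the illustrative assumption that the number of processes doubles at every step):

```latex
% Assume the process count doubles each step: step n runs A^S \cdot 2^n processes,
% each generating one configuration, so after T steps
C \;\le\; A^{S}\,(1 + 2 + 4 + \cdots + 2^{T}) \;=\; A^{S}\,(2^{T+1} - 1)
% Taking logs as in relations 4.0 to 7.0:
\Longrightarrow\quad I \;=\; \mathrm{Log}(C) \;\le\; S\,\mathrm{Log}(A) + (T+1)\,\mathrm{Log}(2)
```

The time term is now linear in T rather than logarithmic, which is just the potential significance claimed above.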


***

As far as the big evolution question is concerned I regard this matter with studied detachment. God as the sovereign author of the cosmic story could introduce information into the cosmic configuration generator using either or both terms in relation 7.0; in particular, if, unlike primitive humanity at our current technological juncture, God has at his fingertips the power of expanding parallelism to crack the so-called blind search problem, the second term on the righthand side of 7.0 has the potential to become significant. Accordingly, I reject NAID's wrongly identified "blind natural forces" category when those forces are in fact highly sophisticated because they are in the hands of Omniscient Omnipotence. The trouble is that the NAID community have heavily invested in an anti-evolution culture and it looks like they've passed the point of no return, such is their huge social and tribal identification with anti-evolutionism. Ironically, even if bog-standard evolution is true (along with features like junk DNA) we are still faced with the Intelligent Design question. As for myself I have no indispensable intellectual investment in either the evolutionist or anti-evolutionist positions.

***


As I have remarked so many times before, what motivates NAID (& YEC) culture's aversion to the idea that information can be created by so-called "blind natural forces" is this culture's a priori anti-evolution stance. Underlying this stance, I propose, is a subliminal gnosto-dualist mindset, and this mindset in this subliminal form afflicts Western societies across the board, from atheism to authoritarian & touchy feely expressions of Christianity; in fact Western religious expression in general. But that's another story. (See for example my series on atheist Don Cupitt - a series yet to be completed)

What's compounded my problem with NAID & YEC nowadays is their embrace of unwoke political culture, a culture which automatically puts them at odds with the academic establishment. I'll grant that that establishment and its supporters have often (or at least sometimes) subjected outsiders (like Bill for example) to verbal abuse and cancellation (e.g. consider Richard Dawkins & the Four Horsemen, RationalWiki etc.). This has helped urge them to find friends among the North American far-right academia-hating tribes and embrace some of their political attitudes (See here). As I personally by and large support academia (but see here) it is therefore likely that I too would be lumped together by the NAID & YEC communities as a "woke" sympathizer, even though I reject any idea that the problems of society can be finally fixed by social engineering initiated centrally, least of all by Marxist social engineering. But then I'm also a strong objector to far-right libertarian social engineering which seeks a society regulated purely by a community's use of their purses (and then be prey to the chaotic non-linearities of market economics and power grabbing by plutocratic crony capitalists). In today's panicked and polarized milieu the far-right would see even a constitutional Royalist like myself, who is also in favour of a regulated market economy, as at best a diluted "socialist" and at worst a far-left extremist, ripe for the woke-sin-bin!



NOTE: An article on "Conservation of Information" has recently popped up on Panda's Thumb. See here: Conservation of arguments

Monday, April 29, 2024

NAID Part IV: Evolution: Creation on Steroids


Picture from: Natural selection – News, Research and Analysis – The Conversation – page 1


The Earlier Parts

The three previous parts of this series can be found at the end of these links:




***


Evolution's  a-priori information

The following is based on a paper which can be accessed here

The unconditional probability of life evolving, Prob(Life), is extremely small: It is equal to the ratio of the number of possible organic configurations to the total number of all possible configurations. Because living configurations are highly organised, then according to disorder theory the number of possible organic configurations is a minute fraction of the total number of configurations, thus implying a tiny unconditional probability of life.

If we now assume that standard evolution has actually taken place as a result of the right physical conditions being contrived, then the unconditional probability of life is given by this equation:

Prob(Life) = Prob(Life | right conditions) x Prob(right conditions)

The first probability on the right-hand side of this equation is the conditional probability of life evolving, a probability calculated assuming the "right conditions" are in place. If evolution has occurred, this probability must be sufficiently large for there to be a reasonable chance of life evolving in a cosmos whose dimensions are considered to be part of the physical regime with the "right conditions". But because the value of the unconditional probability Prob(Life) is so minuscule, this implies that a huge improbability must be embedded in the second factor on the righthand side of the above equation, namely the unconditional probability of the right conditions existing, Prob(right conditions).

If we turn the above equation into a Shannon information equation by taking the negated Log of both sides, we get:

I(Life) = I(Life | right conditions) + I(right conditions)

Since we require that Prob(Life | right conditions) is a realistic probability, this implies that the information value of the first term on the right-hand side won't be enormous. Hence, it is clear that most of the information required for the emergence of life will be embedded in the term I(right conditions). Nevertheless, some information is embedded in I(Life | right conditions), and this means that so-called "natural processes" do generate information, if only a relatively small amount. This is contrary to what many Naive IDists and Biblical literalists try to push past us. I made this point about slow information creation by a physical regime in Part II ......but see the important qualifying footnote at the end about parallel processing vs expanding parallelism*.
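A toy illustration of this bookkeeping (the numbers are entirely my own and purely illustrative):

```python
import math

# Toy numbers for I(Life) = I(Life | right conditions) + I(right conditions).
p_life_given_conditions = 0.5    # a "realistic" conditional probability
p_conditions = 10**-100          # hugely improbable "right conditions"

I_life_given_conditions = -math.log2(p_life_given_conditions)  # 1 bit
I_conditions = -math.log2(p_conditions)                        # ~332 bits

I_life = I_life_given_conditions + I_conditions
print(f"I(Life | conditions) = {I_life_given_conditions:.0f} bits")
print(f"I(conditions)        = {I_conditions:.0f} bits")
print(f"I(Life)              = {I_life:.0f} bits")
# Almost all of I(Life) sits in the "right conditions" term.
```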

But the main point is this: Conventional evolution, if it is to work, must be the depository of huge amounts of a priori contingent information and this information is embodied in those "right conditions". Although a physical regime using parallel processing does generate some information (slowly), by far and away the greatest part is found in those given right conditions. In conventional evolution that information would be embedded in the "spongeam": See Part I for more on the spongeam. Conventional evolution, if indeed it is the process by which life has emerged, is necessarily an astonishingly sophisticated process in terms of its demand for information.

In times past I might have referred to the necessary a priori information required to get evolution to work as "front-loaded" information, but I believe that expression has an implicit error: This is because the a priori information required by evolution effectively constrains the behavior of the cosmic system everywhere and everywhen and therefore it is more appropriate to talk of the ongoing input of information rather than this information being "front loaded"; an expression which smacks of deism. 

***


Joe Felsenstein

In my short contact with atheist and mathematical evolutionist Joe Felsenstein it was clear to me that he understood the implications of evolution; namely, that it must come with the necessary package of a-priori information in order to work and that this information is embodied in the smooth "fitness landscape" (i.e. the spongeam) that would allow evolutionary diffusion to find and settle on organic structures. Felsenstein's understanding here leaves us wondering whatever NAIDs Casey Luskin and Stephen Dilley are supposed to mean when they talk of "Evolution on its own" (See Part III). Clearly Felsenstein doesn't believe that there's such a thing as "Evolution on its own" because evolution necessarily comes packaged with a lot of information. Although, of course, Felsenstein doesn't believe that evolution's burden of a-priori information has its origins in Divine design. Instead, he decides to leave the conundrum of the origin of this information in the hands of physicists. Here's an extract from his comments on one of my blog posts ... (See here):

You said: "In my reading of Dembski all he seems to be saying is that the fitness surface required for evolution to work is a very rare object in the huge space of mathematically possible fitness surfaces. Given its rarity and assuming equal a-priori probabilities it follows that the required fitness surface has a huge amount of information." And you also said "The mathematical fact is that smooth fitness spaces are extremely rare beasts indeed when measured up against the totality of what is possible."

If the laws of physics are what is responsible for fitness surfaces having "a huge amount of information" and being "very rare object[s]" then Dembski has not proven a need for Intelligent Design to be involved. I have not of course proven that ordinary physics and chemistry is responsible for the details of life -- the point is that Dembski has not proven that they aren't.

Biologists want to know whether normal evolutionary processes account for the adaptations we see in life. If they are told that our universe's laws of physics are special, biologists will probably decide to leave that debate to cosmologists, or maybe to theologians.

The issue for biologists is whether the physical laws we know, here, in our universe, account for evolution. The issue of where those laws came from is irrelevant to that.


...apparently Felsenstein sees the question of the origins of evolution's a-priori information as beyond his brief and in the realm of physics. This response by Felsenstein simply shelves the fact that the descriptive character of physics means we will always face the barrier of an absence of Aseity; that is, science is destined to remain in the domain of explanatory incompleteness. The "explanations" of physics ultimately present us with a "compressed information all the way down" barrier. Nevertheless, it is clear that NAID culture does no justice to evolutionists like Joe Felsenstein who are acutely aware of the question of evolution's a-priori information. And yet conversely it is also true to say that secular evolutionists have given no credit to the work of serious IDists like William Dembski, who have been unfairly written off as pseudo-scientists; no wonder they've fallen into the embrace of the far-right! (See my next post on NAID's affair with the far-right)

***

Creation with a Vengeance

The question of whether a physical regime capable of evolution can exist and/or actually does exist presents some imponderables: Firstly, do physical regimes which set up the necessary conditions for evolution have at least a mathematical existence? Secondly, assuming this mathematical existence, how would one go about successfully finding and selecting these conditions?  Such a question, if it is a computationally irreducible question, may be beyond human ability to answer: In which case evolution is creation with a vengeance. 

Clearly, it is understood by competent commentators that evolution isn't some kind of random magic. Therefore, what theologian Rope Kojonen is saying (See previous parts) isn't news at all, as anyone who really understands evolution like Joe Felsenstein knows about the necessary conditions which must exist in the abstract landscape of configuration space for evolution to work. So, Casey Luskin, Stephen Dilley and Rope Kojonen seem to be barking up a tree that's been long since barked up before. Rope Kojonen is saying nothing new. What compounds the confusion, however, is that Casey Luskin and Stephen Dilley seem to be responding to Kojonen with incoherent nonsense as we have seen in the previous parts. 

***

None of this is to imply that I'm in any way culturally committed to a particular stance on conventional evolution, whether of NAID culture or the academic establishment. But in my opinion the waters of Intelligent Design have been thoroughly muddied by Naive ID, and as with questions like junk DNA they've prematurely nailed their colours to the mast. The irony is that once one understands just what a working evolutionary system demands in the way of a-priori information, intelligent creationism itself puts evolution back on the agenda.



Footnote
* IMPORTANT QUALIFICATION
As I've said before the above considerations only apply in a cosmos whose processes work in parallel: They don't apply if the cosmos somehow employs the exponentials of expanding parallelism. There is, after all, a measure of expanding parallelism and even hints of declarative computation in quantum theory. 

Thursday, January 18, 2024

Galen Strawson on "Why is there something?"


"W

hy is there something rather than nothing? It’s meant to be the great unanswerable question. It’s certainly a poser. It would have been simpler if there’d been nothing: there wouldn’t be anything to explain".


So starts a Guardian article written by Galen Strawson* where he reviews a book by Philip Goff titled Why? The Purpose of the Universe. 

Usually, the word "Aseity" is only applied to God: The phrase "The Aseity of God" is intended to convey that in some way we don't understand God's existence is a self-explaining logical truism and therefore, the idea that God doesn't exist is a contradiction. For those who are uncomfortable with the kind of theism which posits an all-embracing totalizing sentience called "God" I suppose it is possible to attempt to apply the notion of Aseity to the secular cosmos; Viz: that the existence of the cosmos itself has some inherent logical necessity that we've yet to understand, if indeed "Aseity" can ever be humanly comprehended as it may involve infinities.

But as I have expressed many times before, whether the source of Aseity is sentient or not, that source isn't going to be found in conventional physics & science. This is because the laws of science as "explanations" merely describe. That is, they do not "explain" in a sense which addresses any inclination we may have toward believing that our perceived reality has its foundation in some kind of Aseity. Conventional science and physics work because the high organization and high registration in the patterns of our experiences make it possible to describe those patterns in the succinct and compressed forms we call the "laws of physics/science". No matter how compressed these forms are - and they can never compress to nothing - they will always leave us with a hard kernel of incompressible contingent information which has no further "explanation" than "It just is". As I wrote in this blog post:

I favour the view that mathematics betrays the a-priori and primary place of mind; chiefly God’s mind. The alternative view is that gritty material elementals are the primary a-priori ontology and constitute the foundation of the cosmos and mathematics. But elementalism has no chance of satisfying the requirement of self-explanation as the following consideration suggests: what is the most elementary elemental we can imagine? It would be an entity that could be described with a single bit of information. But a single bit of information has no degree of freedom and no chance that it could contain computations complex enough to be construed as self-explanation. A single bit of information would simply have to be accepted as a brute fact. Aseity is therefore not to be found in an elemental ontology; elementals are just too simple.

Those who find the notion of God unacceptable nevertheless often betray an instinctual intellectual need for at least a non-sentient form of Aseity: We see hints of this instinct in the expression of puzzlement at the "unexplained" contingencies that science can only ever deliver (But see my quote from Bertrand Russell below). It seems that human intuition is confounded by brute fact and yearns for a deeper explanation, reason or cause (call it what you like) for the apparently arbitrary state of affairs the cosmos presents us with. Given the state of human knowledge, then, as the above quote from Strawson suggests, "nothingness" is actually the most reasonable state of affairs we can think of, as it wouldn't demand any explanation at all. But in discussing these questions we really need to define just what we mean by words like "explanation", "reason" and "cause"; for as we have seen, "scientific explanation" is in the final analysis mere description and in any deeply intuitively satisfying sense is no explanation at all (But see Russell!)


Anyway, continuing with Strawson's article...

STRAWSON: Some people think that if we knew more, we’d see that there couldn’t have been nothing. That wouldn’t surprise me. Others go further: they think we’d see that there couldn’t have been anything other than just what there is: this very universe, containing just the kind of stuff and laws of nature it does contain. That wouldn’t surprise me either, nor – I suspect – Einstein: “What really interests me,” he said, “is whether God had any choice in the creation of the world.” (Einstein’s God is a metaphorical device: “The word God is for me nothing more than the expression and product of human weaknesses.”)

MY COMMENT: Once again, we see the same intellectual hankering expressing itself here; namely, that the very existence of the cosmos is founded in some kind of logical necessity or has a profound "reason", "explanation" or "cause" - whatever those terms mean. Not only that, but some wonder if the very form and configuration of the cosmos (as described by its laws) is underwritten by logical necessities we have yet to comprehend. 


STRAWSON: Most people who ponder these things take a different view. They think the universe could in fact have been different. They think it’s puzzling that it turned out the way it did, with creatures like us in it. They are tempted by the idea that the universe has some point, some goal or meaning. In Why?, Philip Goff, professor of philosophy at Durham, argues for “cosmic purpose, the idea that the universe is directed towards certain goals, such as the emergence of life” and the existence of value.

I’m not convinced, but I’m impressed. Why? is direct, clear, open, acute, honest, companionable. It manages to stay down to earth even in its most abstract passages. I’m tempted to say, by way of praise, that it’s Liverpudlian, like its author.

MY COMMENT: OK, so assuming the very existence of the cosmos is a necessity (even if we are unclear about the logic of that) the next question is why is the cosmos the way that it is? According to Strawson most people don't see logical necessity in the form of the cosmos even if its existence is a necessity; that is, it seems logically possible the cosmos could have had a different form altogether with different laws. So, according to Strawson, in response to this Philip Goff addresses the question of why the universe is as it is by proposing that the cosmos has goals and purpose, and these goals and purposes bring configuration & form. Goff is therefore implying that the cosmos is subject to teleological constraints. Or as I have put it many times in this blog using an algorithmic metaphor, the cosmos works like a declarative computation: that is, it is searching for declared goals: The cosmos has a declared computational purpose. 

But Strawson is not convinced ...too right he's not convinced: Teleology fits rather too well with an a priori sentient creator! Talk of "cosmic purpose" makes most paid up atheists feel very uncomfortable indeed. 


STRAWSON: The book has a double beat, like a heart: each chapter begins with a diastole, an admirably accessible section on its subject – consciousness, the point of life, the purpose of the universe (if any), the existence (or non-existence) of God – and closes with a systole, a more taxing “Digging Deeper” section.

Goff rules firmly against the traditional Christian God, omnipotent, omniscient and omnibenevolent – while backing the notorious “fine-tuning argument”, which goes roughly as follows: it’s so incredibly unlikely that a universe such as ours, containing life, consciousness and value, should have come into existence at all that we must suppose that some purpose has been at work, tuning things to come out as they have. It’s extremely hard to do this well, and Goff provides an intellectually aerobic primer on the logic of probability, and in particular the Bayes’ theorem, one of the core ideas of our day. His conclusion is as advertised in his title: nothing is certain, but the balance of evidence favours belief in cosmic purpose.

MY COMMENT: As I am unlikely to read Goff's book I can't challenge him on the specifics of his rejection of the Christian God; however, I assume that Goff has in his mind some sort of overarching sentience working out its will in the cosmos, because only in the presence of sentience do purpose, goal and meaning have any intelligibility. I personally have gone down the (Christian) theism route as the only way I can think of satisfying our need for Aseity, epistemic security, a sense of anthropic purpose and an account of human social & political failure in one swoop (not to mention the need for human salvation). So for me the traditional Christian God is my way of trying to make sense of the human predicament and circumstances; if indeed the need to make ultimate sense of things has meaning beyond human strivings; after all, it seems unlikely animals are plagued with the enduring curiosity which drives a lifetime of existential yearning for ultimate explanation and purpose. Animals appear to be satisfied to simply accept the earthly status quo, as long as it provides food and safety (although there is evidence that at least some animals also prefer an interesting, varied & social environment, it is not clear that they are plagued by existential angst over meaning and purpose).

I guess that Goff's Bayesian arguments are along the lines I've described in this document. However, I think I'd agree with the last sentence above: Viz: that according to Goff  nothing is certain, but the balance of evidence favours belief in cosmic purpose.


STRAWSON: The question is genuinely difficult. I’m bothered by the fact that many of the arguments for fine-tuning depend on varying the fundamental physical constants (eg the charge on electrons) while holding the existing laws of nature fixed. I can’t see why engaging in this curious activity could ever be thought to explain anything, or support any interesting conclusion. And if – as Einstein and I suspect – nothing could possibly have been different, the fine-tuning arguments collapse, as Goff acknowledges. But his discussion is ingenious and illuminating.

MY COMMENT: I think Strawson has a point here; that is, that fine tuning cannot be coherently separated from the other aspects of the laws of physics. Using the algorithmic metaphor: It is clear that both initial conditions and the information inherent in the laws of physics form one package of curiously contingent fine tuning. Moreover, there is no known logical obligation which tells us why the cosmos should sustain itself moment by moment and place by place. Ergo, the so-called fine tuning of the fundamental constants is not the only enigma; so too is the maintenance of the known form of the laws of physics everywhere and everywhen.

In his last sentence in the foregoing quote Strawson displays the same intuitive intellectual instinct which seeks some kind of Aseity "explaining" why the cosmos is as it is. Although I guess that in Strawson's case he would likely posit that that Aseity is to be found in a non-sentient object, rather than in the conventional notion of God. 


STRAWSON:  In the chapter on consciousness, Goff brings up the standard view that there’s a radical difficulty in explaining its existence. I think that those who believe this have gone wrong right at the start: they think – quite wrongly – that they know something about the nature of matter that makes it mysterious that consciousness exists. Wrong. There’s no good reason to think this, as Goff agrees. The solution is to suppose (along with a good number of winners of the Nobel prize for physics) that consciousness in some form is built into the nature of matter from the start. This view is known as panpsychism, and Goff ends his discussion with “a prediction: panpsychism will, over time, come to seem just obviously correct”.

MY COMMENT: I sort of agree with Strawson and Goff here: That is, that matter, if rightly configured has built into it the ability to generate conscious cognition. I stress rightly configured because I don't think our current AI simulations, no matter how good, are conscious; they are just simulations and don't use matter in a way which generates conscious thought. My long shot guesses at the way matter must be used to generate consciousness can be found in this paper.  See also my footnote below on idealism*


STRAWSON: Why? is a rich book. It aims high and ends with some good political reflections. It’ll turn quite a few heads. It should get the discussion it deserves. I don’t for all that think the universe has a purpose. I think it just is.

It does, though, seem to have a taste for complication. The balance of evidence is a delicate thing, but it seems at present to favour the view that something is going on that isn’t fully accountable for by the laws of physics. It’s nothing to do with “Nobodaddy” (William Blake’s name for the nonexistent Christian God), or any sort of goal, but Wittgenstein seems to be on the right track when he tries to express his sense of absolute or ethical value and finds it crystallised in one particular experience: “I wonder at the existence of the world”.

MY COMMENT: So, Strawson thinks the cosmos "just is" and without purpose. Bertrand Russell said something similar in his debate with Father Copleston:  

I should say that the universe is just there, and that's all [there is to it!]

Strawson's notion of a "just there" cosmos is consistent with what we understand about so-called "scientific explanation", which, because it is in the final analysis fundamentally just a form of description, can only ever leave us at the contingent edge of a "just there" kernel of information. So it is no surprise that Strawson can only say “I wonder at the existence of the world”. Well, so do I, but I have the urge to seek beyond the absurdity of a "just is" contingency to a deeper concept of explanation which satisfies the human yearning for purpose. Aseity based on a Christian concept of God and an account of human Sin are concepts I find no more absurd than a "just is" cosmos and the moral, social and political perplexities it leaves us with.


***

Strawson's reference to God as "Nobodaddy" is a pointer to the attitude of many in the hyper-secularized atmosphere of elite intellectual culture; these communities look askance at theists and religionists and may even treat them with a mocking disrespect. Although hyper-secularized culture dominates academia and intellectual elite communities, these groups are in many respects an anomaly in the sea of faith which is broad and full in the wider world. Billions of the world's population are religiously motivated and in notable cases those religionists of (authoritarian) faith dominate politics. If the hyper-secularized intellectual community think of those of faith as deplorables with absurd views it will only help polarize the religious populares against them and provide fertile ground for demagogues who will tell those religionists what they want to hear. The populares will turn to these demagogues for guidance rather than to academia, which they may perceive as part of a conspiracy to defraud them of their traditional values. In spite of their sneers, I personally support academia, although I would criticize those like Strawson who hold a hyper-secular message of a "just is" cosmos, a paradigm which I find just as absurd as they might find my theism. Moreover, as we know from the French revolution and various attempts to establish Marxism, hyper-secularism is also a high road to authoritarian traditionalist values, the re-emergence of a paradoxical secularized religion and the return of ruling demagogues. The political world of left and right isn't a flat space but is curved into a sphere where the extremes of left and right meet at the same authoritarian place.



* Footnote on Idealism

I hold the view that conscious cognition exists because without it reality is an unintelligible notion: If reality doesn't deliver patterns of conscious experience and, at that, sufficiently organized experience for conscious cognition to be able to construct a rational ordered reality, then the meaning of reality is lost in the nebulous notion of "gritty matter" having an existence independent of sentient perception. So, reality is the conjunction of organized conscious experience, and this organization facilitates the construction of a rational world which conscious thought builds around organized experience. The Matrix teaches us that reality is the logic of experience. 

But if conscious thought is itself to classify as real it too must deliver a rational account of itself. It follows then that reality has a self-affirming, self-referencing character: Viz: Conscious perception of the cosmos gives the cosmos intelligibility and coherence; but if reality exists only if conscious thought delivers a rational account of it, then for conscious thought to classify as real it too must have a rational account of itself. So, as conscious thought gives coherence and substance to the concept of a highly organized material cosmos then the cosmos in turn accords reality to conscious cognition by returning a rational account of conscious thought. (See the introduction of this book where I first mooted this self-referencing account of reality). However, there is one big problem with this form of idealism: Human beings come in and out of existence and therefore cannot be the primary reality. This is why it becomes necessary to posit a primary overarching sentience which gives meaning and reality to the cosmos. 


* Guardian Footnote: 

Galen Strawson is a philosopher and author of Freedom and Belief (Oxford). Why? The Purpose of the Universe by Philip Goff is published by Oxford (£14.99).