Wednesday, June 26, 2024

Examining Mr. R. Carrier's use of Bayesianism. Part I


Richard Carrier's theological conceptions are of a similar quality to his use of Bayesian logic. In this series of posts I critique the latter.


My friend James Knight has drawn my attention to abrasive atheist Richard Carrier, in particular a post of his here:

Why the Fine Tuning Argument Proves God Does Not Exist • Richard Carrier Blogs

In his post Richard attacks the fine-tuning argument for a Divine Designer & Creator. I've never been very keen on the fine-tuning argument myself, as it smacks too much of God-of-the-gaps thinking: a kind of quasi-deistical thinking which perceives the Divine Creator as acting only at the boundaries of the physical regime, leaving the interior of those boundaries as the domain of so-called natural forces.

Nevertheless, if I were an evangelical atheist like Richard, I certainly wouldn't be arguing in the fashion he has argued. As we shall see he overworks Bayesian logic. 

Many people like Richard don't believe there is a God, but that doesn't stop them from betraying implicit theological beliefs about their concept of God and the implications which follow from the existence or nonexistence of the God of their conceptions. As we shall see, this is very apparent in Richard's post.

***

But firstly, Bayes theorem. In this paper I looked at an application of Bayes theorem to the question of God by Christians Roger Forster & Paul Marston (F&M) in their book Reason and Faith. Given my interpretation of probability, I find that applying Bayes theorem to the question of God's existence (or nonexistence, for that matter) is not easily achieved coherently.

I understand probability in frequentist terms: That is, the probability of an event is the ratio of cases favourable to the event to all possible cases. My argument for this definition of probability is presented in this paper. My AI Thinknet Project was based on the idea that the human mind is constructed to gather real world statistics, statistics which are then cast as a probabilistic association network. The required probabilities for this network come from three possible sources. Viz: 

1. Direct experience of the statistics of a reality sufficiently organized to facilitate human learning,   

2. Learning handed on by culture through language and group think. 

3. Hard wired instinctual learning. 

All this learning is expressed as the linkage weights of a vast association network. Although I would contend that this learning ultimately has its origins in a probabilistic-statistico-frequentist take on our highly organized cosmic reality, in the human mind it surfaces into consciousness as degrees of belief. This fact would explain why Bayes theorem is sometimes cast in the form of degrees of belief. But as I show in my paper on F&M's application, Bayes theorem is easily derived from a statistical/frequentist concept of probability; it is simply an outcome of frequentist accounting.
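That Bayes theorem is "simply an outcome of frequentist accounting" can be checked on a toy finite population. The Python sketch below is purely illustrative (the population counts are hypothetical, nothing to do with F&M): it shows that a conditional probability obtained directly by counting favourable cases agrees exactly with the Bayes-theorem combination of the other frequency ratios.

```python
from fractions import Fraction

# A toy finite population: each outcome is a (condition, test) pair.
# The counts are hypothetical illustration values.
population = (
    [("D", "+")] * 9  + [("D", "-")] * 1 +    # 10 cases with condition D
    [("N", "+")] * 99 + [("N", "-")] * 891    # 990 cases without it
)

def prob(pred):
    """Frequentist probability: favourable cases / all possible cases."""
    return Fraction(sum(1 for x in population if pred(x)), len(population))

def cond(pred, given):
    """Conditional probability as a ratio of counts within a sub-population."""
    sub = [x for x in population if given(x)]
    return Fraction(sum(1 for x in sub if pred(x)), len(sub))

# P(D | +) obtained directly by counting...
direct = cond(lambda x: x[0] == "D", lambda x: x[1] == "+")

# ...equals the Bayes-theorem combination P(+|D) * P(D) / P(+).
bayes = (cond(lambda x: x[1] == "+", lambda x: x[0] == "D")
         * prob(lambda x: x[0] == "D")
         / prob(lambda x: x[1] == "+"))

assert direct == bayes  # Bayes theorem is an identity of frequency ratios
print(direct)           # prints 1/12
```

The `assert` holds for any finite population whatsoever, which is the point: within a well-defined denumerable frame Bayes theorem is bookkeeping, not a separate epistemology.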

But there is a big but here: the trouble with a frequentist take on probability is that unless one can rigorously enumerate the possibilities, it becomes difficult to give coherent meaning to the probabilities in Bayes' equation. In that case the fallback is to substitute into the equation those intuitive degree-of-belief probabilities (some might call them prejudices!) that the human mind is apt to come up with.

As an illustration of the difficulty here, let's for a moment forget about God and simply ask ourselves this question: what is the probability of our universe existing, with its very singular physical regime of laws? To evaluate this probability in the frequentist fashion we need to know the size of the set of possibilities we are selecting from, in order to calculate the ratio of favourable cases against the backdrop of all cases. But the mind boggles at this question: the number of logically possible physical regimes of laws and constants seems too open-ended to be humanly enumerated. Therefore, even though we may have at least an inkling of the statistical weight of our own cosmos, we cannot calculate the probability of our universe, because the backdrop set of possibilities is all but indenumerable. Max Tegmark's mathematical universe does help address this problem: Tegmark postulates that all mathematically possible universes actually exist. If we accept this thinking then, naturally enough, we can understand why we find ourselves in the kind of logically possible universe which is favourable to our existence. I'll leave a discussion of this bizarre suggestion of Tegmark's to those who are interested.

It seems that if our probability calculations are to be denumerably meaningful then an assumed physical regime must exist in the first place, and we must understand this regime sufficiently to be able to construct the ratios that define probability. Probability as a concept is at its most meaningful when used within the frame of the cosmos; it becomes problematical if used outside that frame.

So, if we cannot coherently define the enumerations of probability when we ask about the probability of our cosmos, what hope is there when we include in this open-ended, indenumerable frame the concept of divinity? Probability is a concept that is only intelligible when we construct it within a well-defined, denumerable frame.

Nevertheless, F&M go ahead with their calculation of the probability of God, perhaps with tongues firmly in cheek and in the spirit of "let's use agnosticism's own probabilities to derive the probability of God". What comes to the rescue are those very human, instinctual, degree-of-belief probabilities, which allow F&M to slot guesstimated values into Bayes' otherwise meaningful equation. Unsurprisingly, using this procedure F&M arrive at a probability of God of all but one.

Crucial to F&M's calculation is the following conditional probability:

P(H|!G) = 0.000...0001          (Equation 1)

where "..." represents billions of zeros (the notation !G means the negation of condition G).

Read the above equation as "the probability of a habitable universe, H, if God, G, doesn't exist is negligible". Once F&M have established this probability, it easily follows from Bayesian logic that the probability of God is within a minute smidgen of unity. But therein lies a moot point: where do F&M get the above equation from? The answer to that, quoting my article, is as follows:
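To see how a negligible P(H|!G) forces the posterior towards unity, here is a Python sketch of an F&M-style calculation. Every number in it is my own illustrative stand-in (a sceptical prior of 1/1000, P(H|G) of 1/2, and 10^-100 standing in for equation 1's billions of zeros), not F&M's actual figures:

```python
from fractions import Fraction

# Illustrative stand-in values, not F&M's actual numbers:
p_G            = Fraction(1, 1000)     # hypothetical sceptical prior for G
p_H_given_G    = Fraction(1, 2)        # assumed: God plausibly makes a habitable cosmos
p_H_given_notG = Fraction(1, 10**100)  # stand-in for equation 1's "negligible"

# Total probability of a habitable universe, then Bayes theorem.
p_H         = p_H_given_G * p_G + p_H_given_notG * (1 - p_G)
p_G_given_H = p_H_given_G * p_G / p_H

print(float(p_G_given_H))  # prints 1.0 (the exact value falls short of 1 by ~10^-97)
```

The striking feature is the insensitivity of the result: make the prior a million times more sceptical and the posterior is still within a whisker of 1, because everything is dominated by the astronomically small P(H|!G). That is precisely why the provenance of equation 1 matters so much.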

This seems to be based on a quote by Paul Davies where he remarks that random choices are extremely unlikely to arrive at the selection of factors required by a universe capable of evolving life. In fact the odds against such a configuration of factors coming together, if selected at random, is, according to Davies, “...one followed by a thousand billion, billion, billion zeros at least”. I believe the basic perception of Davies here is correct: The configurations we see around us are taken from an extremely rare class of possibility; therefore left to random selection alone we are looking at some absolutely minute probabilities of formation.

Given the nature of absolute randomness, a subject I explore in this book, it is undoubtedly true that within the denumerable random set (i.e. well defined set) a highly organized universe of our kind has a vanishingly small probability.  Now, from this well-defined fact it is at this point we tend to invoke two gut-feeling probabilities; Viz: 

1) A completely random universe is not the kind of universe a divine intelligence would have anything to do with, and

2) Our real-world experience tends to suggest that organization, particularly complex organization, is a strong predictor of an originating human (and sometimes animal) intelligence. The cosmically internal probability from which this observation is derived is then ported to the outermost frame; it is not clear whether this is a valid procedure, although it is intuitive and instinctive.

A similar instinct comes out strongly in Paley's watchmaker analogy, a notion which has been adopted by the contemporary NAID movement. It is this kind of thinking which is at the root of equation 1 above. But I have my doubts about the soundness of the thinking behind it. Firstly, it is largely based on the activities of the only intelligence we have direct experience of; namely, biological human beings (and possibly some animals as well). Secondly, what about the backdrop context in which the watch is found? The contemporary NAID movement score an own-goal here by skirting dangerously close to deism in contrasting intelligent action with so-called "natural forces"; the latter concept demeans the remarkable features of the natural context - in particular evolution, which, if it has occurred, is a remarkably non-random process of breathtaking organised contingency. But all that is another story I tell elsewhere; the point here is that when we probe a bit deeper into nature we find that Paley's watch doesn't stand out from its background of so-called "natural forces". The "blind natural forces" vs. "intelligent agency" polarity is a major dichotomy error of the NAID community. As a Christian theist the category of "blind natural forces" really sticks in my gullet: how can nature be blind when it is created everywhere and everywhen by a living omnipotent God?

One smart move by F&M is that they talk about a habitable universe; this is a far more general concept than the tendency to focus on the "fine tuning" of certain constants of nature, a feature which is only a small part of the much bigger algorithmic story of so-called "natural forces". By way of distraction, the fine-tuning argument encourages us to overlook the significant fact that these numbers are just one facet of a much greater whole, a whole which includes the dynamics of the laws of physics. These laws operate everywhere and everywhen and constitute a highly organized physical regime delivering organized contingency on a second-by-second basis across a huge cosmos. This physical regime is clearly capable of generating, maintaining and sustaining those highly organized complexes we call life. This is what F&M would identify as a habitable cosmos; and they are right: it is a highly improbable state of affairs in so far as its statistical weight is negligible in the denumerable context of sheer randomness; the latter being a regime that most people have an instinctual gut feeling would not be created by any divinity worth knowing.

The only way a fully random universe can be considered habitable is if it is so enormous that the statistics of randomness demand that there would be tracts of space-time where a life-supporting physical regime holds for just long enough to host elementary life. But by the same token the statistics of randomness require that most of these randomly created life-supporting configurations are very transient, dissolving back into randomness as soon as they are created: the necessary physical laws which would maintain them would themselves be a very rare fluke of randomness and therefore very transient.

The only thing that can be said for the postulation of a maximum-disorder cosmos is that it is at least denumerable. But at a very deep gut level I personally find the postulation of an infinite randomness hopelessly absurd, implausible and above all meaningless. Moreover, it leaves me with a big nagging question: why should the infinite complexity of randomness be thought of as of such fundamental significance as to be the final answer to the conundrum of organised cosmic contingency? The statistics of maximum disorder describe reality in terms of the most extravagantly brute-fact contingency one can imagine, and one wonders at the mystery of a reality of maximum disorder. Nevertheless, let's not underestimate the capacity out there to come to terms with whatever the cosmos throws at us with the phrase "it just is".


Having made these preliminary remarks, it's time to look at Richard Carrier's article in more detail....next time...