Friday, December 20, 2019

Breaking Through the Information Barrier In Natural History. Part I

(This post is still undergoing correction and enhancement)

Propeller technology was always going to have a problem
breaking the sound barrier whereas jet technology didn't.

I was fascinated to read this post on Panda's Thumb by mathematical evolutionist Joe Felsenstein. The post is about the application of Algorithmic Information theory by Intelligent Design theorists to the evolution question. Felsenstein is rather concerned that these IDists are attempting to use Algorithmic Information to prove (yet again) that evolution is impossible. But once again their attempts go awry: they are over-interpreting a genuine complexity/improbability barrier as an impossibility barrier. It turns out to be no more an impossibility barrier than the sound barrier was to flight; with the right technology the barrier can be broken. And once again they interpret the barrier as the sign of some kind of information conservation law which prevents so-called "natural forces" bringing about the emergence of life.

But having said that, let me say that my own position occupies a space somewhere between, on the one hand, the IDists who don't abide by evolution because it uses what they believe to be creatively inferior so-called "natural forces" and, on the other, the atheists who are determined to show that evolution is a cinch and more-or-less in the bag. The fact is some evolutionists (although this may not apply to Felsenstein) do not fully appreciate the information barrier that evolution actually presents (see Larry Moran for example). In fact the implicit parallel computational paradigm found in the standard concept of evolution is not going to be up to the task unless it starts with a huge burden of up-front information. This is where I believe the work of IDists like William Dembski is relevant and valid, as I have said before, although Dembski and his ID interpreters have inferred that Dembski's work also implies some kind of "Conservation of Information", whatever that means.

The consequence of IDists over-interpreting evolution's information barrier as an absolute barrier is that they then conclude they have in their hands proof that "natural forces" cannot generate life and that some extra "magic" is needed to inject information into natural history in order to arrive at bio-configurations. They identify that extra magic not as the "supernatural" (that would look too "unscientific"!) but instead as "intelligent agency".

To a Christian like myself, however, this IDist philosophy raises questions: For although there is a clear creation vs God dualism in Christian theology I find the implicit dualism within creation implied by de facto IDism problematic: If an omniscient and omnipotent God has created so-called "natural forces" then it would seem to be quite within his capabilities to provision creation in such a way that natural history could conceivably include the "natural" emergence of living configurations. The "magic" may already be there for all we know!

Moreover, it is clear that human intelligence, which is one of the processes of the created order, can "create information" and I don't think the IDists would deny that. And yet as far as we know human intelligence appears not to transcend God's created providence. IDists, however, are likely to attempt to get round this observation by trying to maintain that human intelligence has a mysterious and extraordinary ingredient which allows it to create information - for example, I have seen IDists use Roger Penrose's idea that human intelligence involves incomputable processes as the mysterious super-duper magic needed to create information. Penrose's ideas, if correct, imply that human intelligence (and presumably the intelligence that IDists claim on occasions injects information into natural history) cannot be described algorithmically. If this line of argument can be maintained then it would justify the IDists' dualism. But this IDist paradigm can be challenged. For a start I believe Penrose was wrong in his argument about the incomputability of human thinking; see here and here. Yes, human thinking may have some extraordinary ingredient of which we are unaware, but it may have been part of the covenant of creation all along: I don't, however, believe it to be an incomputable process.

This post (and the next post) is my take on the "information barrier" debate between evolutionists and IDists. I will not be going into the minutiae of how the IDists or their antagonists arrive at their conclusions but I will be looking at the conclusions themselves and comparing them with my own conclusions based on three projects of mine which throw light on the subject. Viz:

1. Disorder & Randomness. This project defines randomness algorithmically. 
2. The Melancholia I project: This project discusses the possibility of information creation and/or destruction.
3. The Thinknet project: This project defines the term "specified information" teleologically and notes the parallels with quantum mechanics.

Summarising my position: I can go some of the way with Dembski and the IDists in that there is an issue with standard evolution in so far as it demands some mysterious up-front information in order to work. But there is no such thing as "the conservation of information"; information can be destroyed and created, and in any case the arguments used by IDists are unsafe because there is more than one understanding of what "information" actually is. It is likely that the IDists' "conservation of information" results because we are most familiar with linear and parallel computing resources, a computing paradigm that has difficulty breaking through the information barrier. This contrasts, as we shall see, with the exponential resources of expanding parallelism, and the presence of these resources exorcises the dualist's ghost in the machine which haunts IDist philosophy. On the other hand some atheists are unaware that there is an information barrier (probably not true of Joe Felsenstein - see here and here) and are unlikely to see Dembski's work as laying down a serious challenge.

As usual I don't dogmatically push my own ideas from a polemical partisan soap box seeking conversions to my case. For me this is a personal quest, an adventure and journey through the spaces of the mind which quite likely may not lead anywhere. The journey, as I often say, is better than the destination.

***

In this post I want to introduce the information barrier via William Dembski's work. In the video of a lecture I embedded in my blog post here Dembski introduces his concept of the "conservation of information" via the following simple relationship:

Probability of a given physical regime = r < p/q
1.0
....where p is the unconditional probability of life and where q is the conditional probability of life given that physical regime.

I give an elementary proof of this theorem in the said blog post. Relationship 1.0 will tell us what we are looking for if we rearrange it a bit. Viz:

q < p/r
2.0
Now, we expect p, the unconditional probability of life, to be very small; that is, if we were using a computational method which involved choosing molecular configurations at random, then it is fairly obvious that the complex organised configurations capable of self-replication and self-perpetuation will, by virtue of their rarity in configuration space, have an absolutely tiny probability. If we want to fix this problem of improbability and give life a realistic chance of emerging then conceivably we could contrive some physical regime under which life comes about with a probability q much better than random selection affords, where q >> p and where q is the conditional probability of life; that is, the probability of life given the context of the physical regime. But if q is to be realistic then from 2.0 it follows that we must have r ~ p; that is, the only way of increasing the conditional probability of life is to first select a highly improbable physical regime. As Dembski points out, the improbability has now been shifted onto the probability of the physical regime. Dembski's point becomes clearer if we convert our probabilities to formal "information" values as follows.

Now, the so-called information function I(p), as used by Dembski, is defined for a probability of p using:

I(p) = -ln(p)
3.0

...where ln is the natural logarithm function. From this definition it is clear that for very small values of p function 3.0 is going to return a large value of I; that is a large "information" value.
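A quick numerical illustration of definition 3.0 (the probabilities below are arbitrary choices of mine, purely for illustration):

```python
import math

def surprisal(p):
    """Dembski-style information value of definition 3.0, in nats."""
    return -math.log(p)

print(surprisal(0.5))     # an unsurprising coin flip: a fraction of a nat
print(surprisal(1e-100))  # a very small p yields a very large I
```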

The so-called "conservation of information" becomes clearer if we take the negative natural log of expression 2.0, apply definition 3.0 and then rearrange a little, arriving at:

I(q) + I(r) > I(p)
4.0

Those looking for "natural explanations" don't expect the emergence of life from a "natural" physical regime to be a surprise but rather a very "natural" outcome given that regime. This is tantamount to requiring that I(q), the "surprisal" value of the conditional emergence of life, be relatively low. The trouble is that because I(p) is so high, it follows from 4.0 that if I(q) is to be low then I(r), the information value of the physical regime, is necessarily very high. Relationship 4.0 is effectively a "zero sum game" expression: the information I(p) has to be soaked up somewhere, either by I(q) or I(r) or both. We are therefore always destined to be surprised by the extreme contingency nature flings at us from somewhere within relationship 4.0. So, at first sight we seem to have an information conservation law expressed by 4.0.
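Since 2.0 gives q < p/r, the combined surprisal I(q) + I(r) always exceeds I(p); this little sketch checks the accounting numerically (all three probabilities are illustrative assumptions, not measured values):

```python
import math

I = lambda prob: -math.log(prob)  # definition 3.0

p = 1e-120     # unconditional probability of life (illustrative)
r = 1e-118     # probability of the physical regime (illustrative)
q = p / r / 2  # a conditional probability respecting 2.0 (q < p/r)

# The surprisal I(p) has to be soaked up by I(q), I(r) or both.
assert I(q) + I(r) > I(p)
print(I(q), I(r), I(p))
```

Notice that making the emergence of life "natural" (I(q) small) simply forces the surprisal onto the regime term I(r), exactly as the paragraph above describes.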

Relationship 4.0 is in fact borne out by a closer look at conventional evolution, a process which somehow generates structures that in an absolute sense are highly improbable. Joe Felsenstein himself implicitly acknowledges equation 4.0 in his suggestion that the information for life is embodied in the physical regime we call the "laws of physics". (see here and here). If so then from 4.0 we infer that these laws must be a very improbable state of affairs and therefore of very high information. Evolution as it is currently conceived requires that this information expresses itself in what I call the "spongeam" about which I say more in this blog post. (Actually, my opinion is that the spongeam doesn't exist and that some other provision applies - more about that another time)

Equation 4.0 is beguiling: It seems to come out of some simple and rigorous mathematics. But it embeds an assumption. That assumption is that I(p) has a very high value because we assume from the outset a computation method which involves a serial "throwing of the die" as it were, a method which is going to require many, many conventional computational serial steps and therefore has a prohibitively high time-complexity as far as practice is concerned*2. But then if we have dice rather than just a die we can then have more than one trial at a time and the chances of creating life by chance alone increase, although it is clear that there would have to be an enormous number of parallel trials to return a significant probability of generating living configurations in this way. This multi-trials technique is effectively the brute force resort of the multiverse extremists. It is a fairly trivial conclusion that increasing the number of parallel trials has the effect of "destroying" information in that increasing trial numbers increases the probability of an outcome and so its information value goes down. Clearly in the face of huge numbers of parallel trials an outcome, no matter how oddly contingent it might be, is no longer a "surprise" in such a "multiverse". Not surprisingly this concept appeals to anti-theists who feel more at home in a Godless multiverse.

In the paper linked to in this post of the Melancholia I project I looked into the effect of increasing parallel trial numbers and in particular I considered the subject of expanding parallelism in the generation of outcomes. It's a fairly obvious conclusion that increasing parallel trials increases the probability of a result! But it also goes to show, perhaps a little less obviously, that information isn't conserved in such a context; in fact in this context information is effectively destroyed by the increasing trial numbers and in particular by expanding parallelism. This sort of thing is likely to go down well with anti-theists because the "surprisal" value (i.e. -ln(p)) associated with outcomes is eroded, although of course anti-theists may still be surprised that such a multiplying system exists in the first place!
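The erosion of surprisal by multiplying trials can be sketched in a few lines (the single-trial probability and the trial counts are illustrative assumptions of mine):

```python
import math

def surprisal_of_success(p_single, n_trials):
    """Surprisal (-ln) of at least one success in n independent trials."""
    # P(at least one success) = 1 - (1 - p)^n, computed stably for tiny p
    prob = -math.expm1(n_trials * math.log1p(-p_single))
    return -math.log(prob)

p = 1e-30  # single-trial probability of the outcome (illustrative)
for n in (1, 10**15, 10**30):
    print(n, surprisal_of_success(p, n))
# The surprisal falls from roughly 69 nats towards zero as trial numbers
# grow: enough parallel trials effectively "destroy" the information.
```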

As I contend in this post, multiverse ideas which posit a sufficient number of trials to destroy our surprise at the universe's amazing organised contingencies leave us looking out on a cosmos whose aspect is empty, purposeless and anthropologically meaningless. And yes, I say it again: this kind of universe suits the anti-theists down to the ground; it seems to be the sort of universe they eminently prefer. But in spite of that there is something to take away from these multiverse ideas, in particular the idea of expanding parallelism, hinted at by quantum mechanics, which is evidence of the potential availability of huge computational resources. Given the concept of omniscience & omnipotence implicit in the Christian notion of God, positing the existence of these huge computational resources doesn't seem so outrageous. But in a Christian context the computational potential of expanding parallelism has, I suggest, purpose and teleology and is in fact evidence of a declarative seek, reject and select computational paradigm.

***

Although the -ln(p) concept of information used by Dembski succeeds in quantifying some of our intuitions about information it does have some notable inadequacies and it is these inadequacies which take us on to the subject of Felsenstein's Panda's Thumb post, namely Algorithmic Information theory. Let me explain...

Ironically I called the paper that explores the subject of expanding parallelism "Creating Information" rather than "Destroying Information". This is because my Melancholia I project is really about a concept of information very different to -ln(p). The need for this different concept of information becomes apparent from the following considerations. Although the function -ln(p) adequately quantifies our level of surprisal at outcomes, this definition of information is not good at conveying the idea of configurational information. Take this example: the chance of finding a hydrogen atom at a designated point in the high vacuum of space is very small and therefore, should it happen, we have a high information event here. But it is a very elementary event, an event which only conveys one bit of information: 'yes' or 'no' depending on whether a hydrogen atom appears or not. The trouble is that a one bit configuration is hardly what one would like to call a lot of information! Therefore we need something that is better at conveying quantity of information. From the function -ln(p) it follows that a one bit configuration can "contain" the same amount of information as a large n-bit configuration. This doesn't feel very intuitive, particularly if we are dealing with potentially large and complex configurations; it seems intuitively disagreeable to classify a complex configuration as possibly having the same level of information as a one bit configuration.*1
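The mismatch can be made concrete; the probability of the one-bit event below is an illustrative assumption:

```python
import math

# A one-bit event: a hydrogen atom found (or not) at a designated point
# in near-vacuum. Its probability here is an illustrative assumption.
one_bit_event_surprisal = -math.log(1e-30)

# A 100-bit configuration drawn uniformly at random: probability 2**-100.
hundred_bit_surprisal = -math.log(2.0 ** -100)

# Definition 3.0 assigns these two outcomes almost identical "information"
# even though one is a single yes/no and the other a 100-bit pattern.
print(one_bit_event_surprisal, hundred_bit_surprisal)
```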

Algorithmic information theory attempts to measure the information content of a configuration via its computational complexity, and this returns a measure of information which agrees with our intuitive ideas about the quantity of information found in a configuration, something that -ln(p) doesn't necessarily convey. However, using this concept of information we find that once again the IDists think they have stumbled on another information barrier that scuppers any creation of life by those inferior but dreaded "natural forces"! In the next post this contention will take me into the subject of my book on Disorder and Randomness which also deals with algorithmic information theory. Once again we will find that expanding parallelism bursts through the information barrier. Although Joe Felsenstein and his buddies certainly won't need any help from me to engage the IDists, I will in fact be using my own concept of Algorithmic Information to look into the IDist claims because it provides me with something immediately to hand.
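Algorithmic (Kolmogorov) information is strictly incomputable, but a general-purpose compressor gives a crude, computable stand-in for the intuition; the following sketch is my own illustration, not the formalism used by either side of the debate:

```python
import os
import zlib

# A highly patterned configuration versus a random-looking one,
# both 10,000 bytes long.
ordered = b"ab" * 5000
random_ish = os.urandom(10000)

len_ordered = len(zlib.compress(ordered, 9))
len_random = len(zlib.compress(random_ish, 9))

# The patterned string collapses to a tiny description while the random
# bytes barely compress, mirroring their algorithmic information content.
print(len_ordered, len_random)
```

Unlike -ln(p), this measure tracks the quantity of structure in the configuration itself rather than our surprise at it happening.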

There is one more ingredient that needs to be added to the mix to complete the picture of information creation and this is an ingredient which will certainly not be to the taste of anti-theists whose world view is one of a purposeless cosmos without teleology. I'm talking of my speculative proposal that the cosmos isn't working to some meaningless and mindless procedural process that just goes on and on and leads to nowhere but rather is operating some kind of purposeful declarative computation that uses expanding parallelism to seek, reject and select outcomes. It is in this context that the notion of specified information suddenly jumps into sharp focus; in fact I touch on this subject toward the end of the paper I've linked to in part 4 of my Thinknet project. (See section 11).

I have to confess that if the cosmos is using a purposeful declarative computational paradigm that makes use of expanding parallelism I'm far from having all the details: all I have currently is an understanding of the effect that expanding parallelism has on computational complexity, and the metaphor of my Thinknet project which seems to have parallels with quantum mechanics; quantum mechanics looks suspiciously like a seek, reject and select declarative computation which taps into the resources of expanding parallelism. Contrasting alternatives to my conjectures are that we either have the anti-theist's meaningless procedural multiverse or the primitive notion that God did indeed simply utter authoritarian magic words and via brute omnipotence was able to speak stuff into existence! The latter seems very unlikely theologically speaking: David Bump, who is a (nice) young earthist Christian I am currently corresponding with, has kindly and respectfully supplied me with a long document of his thoughts on what it means to be a Christian who sees God as creating things via spoken words "as is" about 6000 years ago. Frankly I doubt it! As I have analysed David's arguments I have found that for me all this leads to huge theological problems, and unless I turn to fideism these problems don't look as though they are going to go away! I will in due course be publishing my response to David.

Footnotes:
*1 A bit stream can carry a lot of information in the sense of definition 3.0 because its probability is a product of many factor probabilities, and this may equate to a very small overall probability and therefore a correspondingly high information content. But the trouble with -ln(p) is that a one bit configuration could be equally information laden. Another problem with -ln(p) is that once a configuration becomes a "happened event" and is recorded, all its information is lost. This is because probability is a measure of subjective knowledge and therefore once a large configuration becomes known ground, no matter how complex, it loses all its information.... a sure sign that "information" in this subjective sense is easily destroyed and therefore not conserved.

*2 There is also another assumption here (or perhaps it's a confusion rather than an assumption): that probability and randomness are identical concepts - they are not; see my paper on Disorder and Randomness. Dembski uses the principle of indifference in assigning equal probabilities to outcomes for which no prior knowledge exists as to why one outcome should be preferred over another; hence from this subjective point of view the outcomes have equal probability. This procedure is correct in my opinion; but two outcomes which subjectively speaking have an equal probability are not necessarily equally random; randomness is an objective quality deriving from configurational disorder.

Tuesday, December 10, 2019

Moral Relativism


It is true that atheism doesn't set people up well to resist the intellectual pathologies found in the extremes of postmodernism and nihilism; these philosophies are like corrosive acids liable to eat away at not only one's grasp on rationality and truth but also one's morality. The only defence is the deep heartfelt instincts supporting good community which, of course, many atheists feel as strongly as anyone else (see Romans 2:14-16). But other than having the status of being identified as strong social instincts there is little more these instincts have to commend themselves to the atheist world view other than in these social relativist terms; any cosmic absoluteness to morality (and even rationality) is in the final analysis completely lost.

In this connection I was intrigued by a post on the de facto intelligent design web site "Uncommon Descent" by its supremo Barry Arrington in which he posted his response to the comments of two atheists named as Ed George and Seversky. These two talk about morality in the context of their atheism.  Here's the first part of Arrington's post: 

ARRINGTON: Ed George asserted that morality is based on societal consensus.  Upright Biped utterly demolished that argument.  See here.  Seversky and Ed tried to respond to UB’s arguments.

Let’s start with Sev:

"I, like everyone else here, would also want [the rape] to stop. Why? I should not have to say this but it is because we can imagine her suffering and know that it is not something we would like to experience nor would we want to see it inflicted on anyone else. It’s called empathy and its derived principle of the Golden Rule which, in my view, is more than sufficient grounds for morality."

MY COMMENT: Well done Seversky! Empathy is the ultimate (God given) rationale for morality, as we shall see. It is this rationale which motivates the succinct expression of moral code embodied in the Golden Rule. One can hardly complain if Seversky carries out a thoroughgoing implementation of this rule (which of course no human being, apart from one, can do perfectly). But for a thoroughgoing atheist there is no ultimate reason why this Golden Rule should have any claim to an absolute status; after all, it is quite likely that on the basis of a minimalist survival ethic one can imagine social contexts where putting self first may be a "better" strategy (whatever "better" means in this context). Ironically an unbridled free market may illustrate the potential for moral perversity in a world without moral absolutes: for example some claim that rampant selfish self-betterment is supposed to lead to a wealth "trickle-down" effect, an effect which from a survival point of view benefits everyone.

Nevertheless Barry has a good starter here for sharing and promoting a common moral rationale and perhaps discussing what the origins of this rationale might be. But unfortunately he blows his chance:

ARRINGTON: This is a muddled mashup of two of the materialists’ favorite dodges.  First Sev appeals to empathy as the basis for morality.  He completely ignores several problems with this argument, including:

1.  Mere feelings are a very flimsy ground for a moral system.

2.  Some people do not have empathy (we call them sociopaths).  If empathy is the basis for morality, a sociopath has no basis for morality.


MY COMMENT: Contrary to what Arrington is claiming here, the existence of conscious cognition (which is the context in which feelings have meaning) is the only ground for a moral system, as we shall see.

Arrington inadvertently acknowledges the crucial moral role played by empathy in his reference to sociopaths; when empathy is not present things go badly wrong. Sociopaths have something about them which means they have no regard for the feelings of conscious cognition. To get an inkling of what it may be like to be sociopathic, think of some of those realistic "shoot-em-up" computer games: human game players have no compunction in shooting up gaming entities simply because there is no conscious cognition to empathise with! In a sense human beings who live good moral lives outside the games environment turn into "sociopaths" of sorts when they play computer games, in so far as they have no empathy (and rightly so!) for the simulated beings. These simulated entities have no consciousness and therefore no feelings. So consciousness changes everything. Perhaps it is not surprising that some atheists are inclined to deny the reality of the first person perspective of conscious cognition (as Arrington well knows - see here). For some atheists the reality of the first person perspective has just too much mystique; if there really is such a strange thing as a first person perspective inaccessible to third person observation then who knows, perhaps there's even a......

But although empathy is the ultimate rationale for morality there is a difference between empathy and moral systems. Moral systems are there to best serve a society of conscious cognitions and therefore without conscious cognition moral systems are without meaning, goals and purpose. Moral systems are thus a means to an end rather than an end in themselves: for if human beings, like the facades of computer game entities, are a mere simulacrum with no first person perspective behind them then moral systems are purposeless and meaningless. A moral system is a code of behaviour that is cognisant of people's feelings in the context of community.

Moral systems, however, can be intellectually taxing: it is difficult for humans to anticipate all the ramifications of their behaviour in a social context and come to a reliable opinion on which moral systems best serve a community of interacting conscious entities. The moral challenge humans face therefore resolves itself into two challenges: firstly the challenge of raising a sufficient empathetic concern for other conscious entities, and secondly the epistemic challenge of having to work out which moral system best serves community interests. Human beings, of course, are only capable of imperfectly responding to both challenges. But even if we assume that a community is composed of perfectly empathetic beings anxious to get a moral system in place that best serves the community (clearly an idealistic assumption) there remains the problem that human epistemic limitations imply they are unlikely to discover a moral code that best serves the community.

The Golden Rule is a neat one-liner which sums up the spirit of moral systems, but the complexity of community means that the devil is going to be in the detail; the system of moral code that best serves a community of conscious beings is only going to be fully understood by one with divine omniscience.

After Arrington's weak start, however, things improve:


ARRINGTON: 3.  Even for those with empathy, Sev offers no reason why they should not suppress their feelings if they believe the pleasure of their act exceeds the cost of the act in pangs of empathy.

Next Sev appeals to the Golden Rule as a ground for morality.  Well, Sev, it certainly is.  Yet, materialism offers no ground on which to adhere to the Golden Rule as opposed to any other rule such as “might makes right” or “if it feels good do it.”  Sev demonstrates yet again that no sane person actually acts as if materialism is true.

Sev, if you have to act as if your most deeply held metaphysical commitment is false as you live your everyday life, perhaps you should reexamine your metaphysical commitments.

MY COMMENT: Arrington's third point above does make headway: on what basis, other than ephemeral instinct, should anyone be troubled by the consciousness of other human beings rather than simply live for self? For if, as some atheists maintain, consciousness is just an illusion constructed from a complex social interface, why bother with it rather than just play as if one is in a computer game? But contrariwise I suppose, all said and done, an atheist could still claim that whilst moral instincts and code have no real absolute cosmic significance this doesn't stop people behaving instinctively with empathy and using the Golden Rule. Let's hope that remains the case..... there is an unfortunate human history of principles based on bad ideology overruling compassion, all the way from the Nazis, through Christian fundamentalists, to the French and October revolutions.

ARRINGTON: Now let’s go to Ed, who writes:

. . . UB’s question is not worth responding to

Ed states that a person who lives by himself has no moral obligation to anyone who venture near him.  UB points out that if that is true, Ed has just given said loner a license to rape any woman who ventures too near without breaking any moral injunction.  Instead of abandoning his screamingly stupid assertion, Ed pretends UB’s extension of Ed’s premises to their logical conclusion is “not worth responding to”. Ed is not only stupid.  He is a coward.

MY COMMENT: ...or alternatively, what does the loner do if he sees someone who desperately needs help (for example a child drowning in a pond, assuming the loner can swim)? Here we have an example of how the futility and purposelessness implicit in atheism can have a corrosive effect on one's sense of what is right.

But don't let anyone go away thinking that I'm suggesting that it is only atheists whose morality is subject to corruption: As we well know those who think they have a moral code sanctioned by divine authority and go on to implement it without cognizance of the first person perspective, are also liable to corruption; especially so if they think their reading of scripture provides an all but direct, easy and utterly certain revelation of the divine will. Whether a moral system is arrived at from first principles or based on an interpretation of Holy Writ, the fact is you can't trust human beings to get it all right!


Relevant Link
https://quantumnonlinearity.blogspot.com/2018/05/the-foundation-of-morality.html