Friday, December 20, 2019

Breaking Through the Information Barrier In Natural History. Part I

(This post is still undergoing correction and enhancement)

Propeller technology was always going to have a problem
breaking the sound barrier whereas jet technology didn't.

I was fascinated to read this post on Panda's Thumb by mathematical evolutionist Joe Felsenstein. The post is about the application of algorithmic information theory to the evolution question by Intelligent Design theorists. Felsenstein is rather concerned that these IDists are attempting to use algorithmic information to prove (yet again) that evolution is impossible. But once again their attempts go awry: they are over-interpreting a genuine complexity/improbability barrier as an impossibility barrier. It turns out to be no more an impossibility barrier than the sound barrier was to flight; with the right technology the barrier can be broken. And once again they interpret the barrier as the sign of some kind of information conservation law which prevents so-called "natural forces" from bringing about the emergence of life.

But having said that, let me say that my own position occupies a space somewhere between, on the one hand, the IDists who don't abide by evolution because it uses what they believe to be creatively inferior so-called "natural forces" and, on the other, the atheists who are determined to show that evolution is a cinch and more-or-less in the bag. The fact is that some evolutionists (although this may not apply to Felsenstein) do not fully appreciate the information barrier that evolution actually presents (see Larry Moran for example). In fact the implicit parallel computational paradigm found in the standard concept of evolution is not going to be up to the task unless it starts with a huge burden of up-front information. This is where I believe the work of IDists like William Dembski is relevant and valid, as I have said before, although Dembski and his ID interpreters have inferred that his work also implies some kind of "Conservation of Information", whatever that means.

The consequence of IDists over-interpreting evolution's information barrier as an absolute barrier is that they then conclude they have in their hands proof that "natural forces" cannot generate life and that some extra "magic" is needed to inject information into natural history in order to arrive at bio-configurations. They identify that extra magic not as the "supernatural" (that would look too "unscientific"!) but instead as "intelligent agency".

To a Christian like myself, however, this IDist philosophy raises questions: For although there is a clear creation vs God dualism in Christian theology I find the implicit dualism within creation implied by de facto IDism problematic: If an omniscient and omnipotent God has created so-called "natural forces" then it would seem to be quite within his capabilities to provision creation in such a way that natural history could conceivably include the "natural" emergence of living configurations. The "magic" may already be there for all we know!

Moreover, it is clear that human intelligence, which is one of the processes of the created order, can "create information", and I don't think the IDists would deny that. And yet as far as we know human intelligence appears not to transcend God's created providence. IDists, however, are likely to attempt to get round this observation by trying to maintain that human intelligence has a mysterious and extraordinary ingredient which allows it to create information - for example, I have seen IDists use Roger Penrose's idea that human intelligence involves incomputable processes as the mysterious super-duper magic needed to create information. Penrose's ideas, if correct, imply that human intelligence (and presumably the intelligence that IDists claim on occasions injects information into natural history) cannot be described algorithmically. If this line of argument can be maintained then it would justify the IDists' dualism. But this IDist paradigm can be challenged. For a start I believe Penrose was wrong in his argument about the incomputability of human thinking; see here and here. Yes, human thinking may have some extraordinary ingredient of which we are unaware, but it may have been part of the covenant of creation all along: I don't, however, believe it to be an incomputable process.

This post (and the next post) is my take on the "information barrier" debate between evolutionists and IDists. I will not be going into the minutiae of how the IDists or their antagonists arrive at their conclusions but I will be looking at the conclusions themselves and comparing them with my own conclusions based on three projects of mine which throw light on the subject. Viz:

1. Disorder & Randomness. This project defines randomness algorithmically. 
2. The Melancholia I project: This project discusses the possibility of information creation and/or destruction.
3. The Thinknet project: This project defines the term "specified information" teleologically and notes the parallels with quantum mechanics.

Summarising my position: I can go some of the way with Dembski and the IDists in that there is an issue with standard evolution in so far as it demands some mysterious up-front information in order to work. But there is no such thing as "the conservation of information"; information can be destroyed and created, and in any case the arguments used by IDists are unsafe because there is more than one understanding of what "information" actually is. It is likely that the IDists' "conservation of information" arises because we are most familiar with linear and parallel computing resources, a computing paradigm that has difficulty breaking through the information barrier. This contrasts, as we shall see, with the exponential resources of expanding parallelism, and the presence of these resources exorcises the dualist's ghost in the machine which haunts IDist philosophy. On the other hand some atheists are unaware that there is an information barrier (probably not true of Joe Felsenstein - see here and here) and are unlikely to see Dembski's work as laying down a serious challenge.

As usual I don't dogmatically push my own ideas from a polemical partisan soap box seeking conversions to my case. For me this is a personal quest, an adventure and journey through the spaces of the mind which quite likely may not lead anywhere. The journey, as I often say, is better than the destination.

***

In this post I want to introduce the information barrier via William Dembski's work. In the video of a lecture I embedded in my blog post here Dembski introduces his concept of the "conservation of information" via the following simple relationship:

Probability of a given physical regime = r < p/q
1.0
...where p is the unconditional probability of life and q is the conditional probability of life given a physical regime whose probability is r.

I give an elementary proof of this theorem in the said blog post. Relationship 1.0 will tell us what we are looking for if we rearrange it a bit. Viz:

q < p/r
2.0
Now, we expect p, the unconditional probability of life, to be very small: if we were using a computational method which chooses molecular configurations at random, then it is fairly obvious that the complex organised configurations capable of self-replication and self-perpetuation will, by virtue of their rarity in configuration space, have an absolutely tiny probability. If we want to fix this problem of improbability and get a realistic chance of life emerging then conceivably we could contrive some physical regime under which the chance of life coming about is a conditional probability q that does much better than random selection, with q >> p. But if q is to be realistic then from 2.0 it follows that we must have r ~ p; that is, the only way of increasing the conditional probability of life is to first select a highly improbable physical regime. As Dembski points out, the improbability has now been shifted onto the probability of the physical regime. Dembski's point becomes clearer if we convert our probabilities to formal "information" values as follows.
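The probability shift can be exhibited numerically in a few lines of Python. The values below are invented purely for illustration (they are not estimates of any real quantity):

```python
import math

# Arbitrary illustrative values - not estimates of anything real:
p = 1e-40   # unconditional probability of life
q = 1e-3    # the "realistic" conditional probability we want the regime to deliver

# From relationship 2.0, q < p/r, any regime delivering this q must itself
# have probability r below p/q:
r_max = p / q
print(r_max)  # of the order of 1e-37: the regime is forced to be
              # almost as improbable as life itself
```

Whatever numbers one plugs in, raising q only drags the regime's own probability r down toward p, which is the point being made.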

Now, the so-called information function I(p), as used by Dembski, is defined for a probability of p using:

I(p) = -ln(p)
3.0

...where ln is the natural logarithm function. From this definition it is clear that for very small values of p function 3.0 is going to return a large value of I; that is a large "information" value.
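Definition 3.0 is a one-liner in code. A minimal sketch (measuring in nats, matching the natural log):

```python
import math

def information(p: float) -> float:
    """Surprisal in the sense of definition 3.0: I(p) = -ln(p), in nats."""
    return -math.log(p)

print(information(0.5))    # a coin flip: about 0.69 nats
print(information(1e-40))  # a tiny probability: about 92 nats
```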

The so-called "conservation of information" becomes clearer if we first take the natural log of expression 2.0, followed by applying definition 3.0 and then with a bit of rearranging we arrive at:

I(q) + I(r) > I(p)
4.0

Those looking for "natural explanations" don't expect the emergence of life from a "natural" physical regime to be a surprise, but rather a very "natural" outcome given that regime. This is tantamount to requiring that I(q), the "surprisal" value of the conditional emergence of life, be relatively low. The trouble is that because I(p) is so high, it follows from 4.0 that if I(q) is to be low then I(r), the information value of the physical regime, is necessarily very high. Relationship 4.0 being effectively a "zero sum game" expression means that something has to soak up the information: either I(q) or I(r) or both. We are therefore always destined to be surprised by the extreme contingency nature flings at us from somewhere within relationship 4.0. So, at first sight we seem to have an information conservation law expressed by 4.0.
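The "zero sum" character of 4.0 can be checked numerically at the limiting case where the relationship is saturated. Again, the numbers below are arbitrary illustrations of mine, nothing more:

```python
import math

def I(p: float) -> float:
    return -math.log(p)   # definition 3.0

p = 1e-40    # unconditional probability of life (invented illustration)
r = 1e-37    # probability of a finely-provisioned physical regime
q = p / r    # the best conditional probability such a regime can then supply

# The information has to be soaked up somewhere: a small I(q) forces a big I(r).
print(I(q), I(r), I(p))
assert math.isclose(I(q) + I(r), I(p))
```

Shrink I(q) (by making q more "natural") and I(r) grows by the same amount: the surprise merely relocates.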

Relationship 4.0 is in fact borne out by a closer look at conventional evolution, a process which somehow generates structures that in an absolute sense are highly improbable. Joe Felsenstein himself implicitly acknowledges equation 4.0 in his suggestion that the information for life is embodied in the physical regime we call the "laws of physics". (see here and here). If so then from 4.0 we infer that these laws must be a very improbable state of affairs and therefore of very high information. Evolution as it is currently conceived requires that this information expresses itself in what I call the "spongeam" about which I say more in this blog post. (Actually, my opinion is that the spongeam doesn't exist and that some other provision applies - more about that another time)

Equation 4.0 is beguiling: it seems to come out of some simple and rigorous mathematics. But it embeds an assumption, namely that I(p) has a very high value because we assume from the outset a computational method which involves a serial "throwing of the die", as it were - a method which is going to require many, many conventional serial computational steps and therefore has a prohibitively high time-complexity as far as practice is concerned*2. But if we have dice rather than just a die we can have more than one trial at a time, and the chances of creating life by chance alone increase, although it is clear that there would have to be an enormous number of parallel trials to return a significant probability of generating living configurations in this way. This multi-trials technique is effectively the brute-force resort of the multiverse extremists. It is a fairly trivial conclusion that increasing the number of parallel trials has the effect of "destroying" information: increasing trial numbers increases the probability of an outcome and so its information value goes down. Clearly, in the face of huge numbers of parallel trials an outcome, no matter how oddly contingent it might be, is no longer a "surprise" in such a "multiverse". Not surprisingly this concept appeals to anti-theists who feel more at home in a Godless multiverse.
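The information-eroding effect of multiplying trials is easy to exhibit: with n independent trials the chance of at least one success is 1 - (1 - p)^n, so the surprisal of the outcome falls away as n grows. A sketch (the single-trial probability is, as before, an invented illustration):

```python
import math

p_single = 1e-40   # invented single-trial probability of a living configuration

def p_at_least_one(p: float, n: float) -> float:
    # Probability of at least one success in n independent trials.
    # For tiny p, (1 - p)**n ~ exp(-n*p); expm1 keeps the arithmetic stable.
    return -math.expm1(-n * p)

for n in (1e10, 1e40, 1e45):
    prob = p_at_least_one(p_single, n)
    surprisal = -math.log(prob) if prob < 1.0 else 0.0
    print(f"n = {n:.0e}: P = {prob:.3g}, surprisal = {surprisal:.3g} nats")
```

With enough trials the probability saturates at 1 and the surprisal collapses to zero: throw enough dice and nothing is surprising any more.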

In the paper linked to in this post of the Melancholia I project I looked into the effect of increasing parallel trial numbers and in particular I considered the subject of expanding parallelism in the generation of outcomes. It's a fairly obvious conclusion that increasing parallel trials increases the probability of a result! But it also goes to show, perhaps a little less obviously, that information isn't conserved in such a context; in fact in this context information is effectively destroyed by the increasing trial numbers and in particular by expanding parallelism. This sort of thing is likely to go down well with anti-theists because the "surprisal" value (i.e. -ln(p)) associated with outcomes is eroded, although of course anti-theists may still be surprised that such a multiplying system exists in the first place!

As I contend in this post, multiverse ideas which posit a number of trials sufficient to destroy our surprise at the universe's amazing organised contingencies leave us looking out on a cosmos whose aspect is empty, purposeless and anthropologically meaningless. And yes, I say it again: this kind of universe suits the anti-theists down to the ground; it seems to be the sort of universe they eminently prefer. But in spite of that there is something to take away from these multiverse ideas, in particular the idea of expanding parallelism, hinted at by quantum mechanics, which is evidence of the potential availability of huge computational resources. Given the concept of omniscience & omnipotence implicit in the Christian notion of God, positing the existence of these huge computational resources doesn't seem so outrageous. But in a Christian context the computational potential of expanding parallelism has, I suggest, purpose and teleology and is in fact evidence of a declarative seek, reject and select computational paradigm.

***

Although the -ln(p) concept of information used by Dembski succeeds in quantifying some of our intuitions about information it does have some notable inadequacies and it is these inadequacies which take us on to the subject of Felsenstein's Panda's Thumb post, namely Algorithmic Information theory. Let me explain...

Ironically I called the paper that explores the subject of expanding parallelism "Creating Information" rather than "Destroying Information". This is because my Melancholia I project is really about a concept of information very different to -ln(p). The need for this different concept becomes apparent from the following considerations. Although the function -ln(p) adequately quantifies our level of surprisal at outcomes, this definition of information is not good at conveying the idea of configurational information. Take this example: the chance of finding a hydrogen atom at a designated point in the high vacuum of space is very small, and therefore should it happen we have a high-information event. But it is a very elementary event, an event which only conveys one bit of information: 'yes' or 'no' depending on whether a hydrogen atom appears or not. The trouble is that a one-bit configuration is hardly what one would like to call a lot of information! Therefore we need something that is better at conveying quantity of information. From the function -ln(p) it follows that a one-bit configuration can "contain" the same amount of information as a large n-bit configuration. This doesn't feel very intuitive, particularly if we are dealing with potentially large and complex configurations; it seems intuitively disagreeable to classify a complex configuration as possibly having the same level of information as a one-bit configuration.*1
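The mismatch can be made concrete with two invented probabilities chosen so that their -ln(p) values almost coincide:

```python
import math

def I(p: float) -> float:
    return -math.log(p)   # definition 3.0

# An elementary one-bit event: a hydrogen atom found at a designated point
# in a hard vacuum (probability invented for illustration).
p_atom = 1e-30

# A 100-bit configuration drawn uniformly at random.
p_config = 2.0 ** -100

print(I(p_atom))    # ~69.1 nats
print(I(p_config))  # ~69.3 nats - nearly the same "information"...
# ...yet one outcome is a bare yes/no and the other is a 100-bit structure.
```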

Algorithmic information theory attempts to measure the information content of a configuration via its computational complexity, and this returns a measure of information which agrees with our intuitive ideas about the quantity of information found in a configuration, something that -ln(p) doesn't necessarily convey. However, using this concept of information we find that once again the IDists think they have stumbled on another information barrier that scuppers any creation of life by those inferior but dreaded "natural forces"! In the next post this contention will take me into the subject of my book on Disorder & Randomness, which also deals with algorithmic information theory. Once again we will find that expanding parallelism bursts through the information barrier. Although Joe Felsenstein and his buddies certainly won't need any help from me to engage the IDists, I will in fact be using my own concept of algorithmic information to look into the IDist claims because it provides me with something immediately to hand.
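As a rough illustration of the algorithmic idea (and only an illustration - true Kolmogorov complexity is incomputable), the compressed length of a string gives a workable upper bound on its algorithmic information, and it does distinguish patterned configurations from disordered ones in a way -ln(p) cannot:

```python
import os
import zlib

# A highly patterned 1000-byte configuration versus 1000 bytes of noise.
ordered = b"01" * 500
noise = os.urandom(1000)

# Compressed length as a crude upper bound on algorithmic information content:
print(len(zlib.compress(ordered)))  # small - the repeating pattern compresses away
print(len(zlib.compress(noise)))    # about 1000 or more - no pattern to exploit
```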

There is one more ingredient that needs to be added to the mix to complete the picture of information creation and this is an ingredient which will certainly not be to the taste of anti-theists whose world view is one of a purposeless cosmos without teleology. I'm talking of my speculative proposal that the cosmos isn't working to some meaningless and mindless procedural process that just goes on and on and leads to nowhere but rather is operating some kind of purposeful declarative computation that uses expanding parallelism to seek, reject and select outcomes. It is in this context that the notion of specified information suddenly jumps into sharp focus; in fact I touch on this subject toward the end of the paper I've linked to in part 4 of my Thinknet project. (See section 11).

I have to confess that if the cosmos is using a purposeful declarative computational paradigm that makes use of expanding parallelism I'm far from having all the details: all I have currently is an understanding of the effect that expanding parallelism has on computational complexity, plus the metaphor of my Thinknet project which seems to have parallels with quantum mechanics; quantum mechanics looks suspiciously like a seek, reject and select declarative computation which taps into the resources of expanding parallelism. The contrasting alternatives to my conjectures are that we either have the anti-theist's meaningless procedural multiverse or the primitive notion that God did indeed simply utter authoritarian magic words and via brute omnipotence was able to speak stuff into existence! The latter seems very unlikely theologically speaking: David Bump, who is a (nice) young earthist Christian I am currently corresponding with, has kindly and respectfully supplied me with a long document of his thoughts on what it means to be a Christian who sees God as creating things via spoken words "as is" about 6000 years ago. Frankly I doubt it! As I have analysed David's arguments I have found that for me all this leads to huge theological problems, and unless I turn to fideism these problems don't look as though they are going to go away! I will in due course be publishing my response to David.

Footnotes:
*1 A bit stream can carry a lot of information in the sense of definition 3.0 because its probability is a product of many probabilities; this may equate to a very small overall probability and therefore a correspondingly high information content. But the trouble with -ln(p) is that a one-bit configuration could be equally as information laden. Another problem with -ln(p) is that once a configuration becomes a "happened event" and is recorded, all its information is lost. This is because probability is a measure of subjective knowledge, and therefore once a large configuration becomes known ground, no matter how complex, it loses all its information... a sure sign that "information" in this subjective sense is easily destroyed and therefore not conserved.

*2 There is also another assumption here (or perhaps it's a confusion rather than an assumption) that probability and randomness are identical concepts - they are not: see my paper on Disorder & Randomness. Dembski uses the principle of indifference in assigning equal probabilities to outcomes for which no prior knowledge exists as to why one outcome should be preferred over another; hence the outcomes from this subjective point of view have equal probability. This procedure is correct in my opinion, but two outcomes which subjectively speaking have an equal probability are not necessarily equally random; randomness is an objective quality deriving from configurational disorder.

Tuesday, December 10, 2019

Moral Relativism





It is true that atheism doesn't set people up well to resist the intellectual pathologies found in the extremes of postmodernism and nihilism; these philosophies are like corrosive acids liable to eat away not only at one's grasp on rationality and truth but also at one's morality. The only defence is the deep heartfelt instincts supporting good community which, of course, many atheists feel as strongly as anyone else (see Romans 2:14-16). But other than having the status of being identified as strong social instincts there is little more these instincts have to commend themselves to the atheist world view other than in these social relativist terms; any cosmic absoluteness to morality (and even rationality) is in the final analysis completely lost.

In this connection I was intrigued by a post on the de facto intelligent design web site "Uncommon Descent" by its supremo Barry Arrington in which he posted his response to the comments of two atheists named as Ed George and Seversky. These two talk about morality in the context of their atheism.  Here's the first part of Arrington's post: 

ARRINGTON: Ed George asserted that morality is based on societal consensus.  Upright Biped utterly demolished that argument.  See here.  Seversky and Ed tried to respond to UB’s arguments.

Let’s start with Sev:

"I, like everyone else here, would also want [the rape] to stop. Why? I should not have to say this but it is because we can imagine her suffering and know that it is not something we would like to experience nor would we want to see it inflicted on anyone else. It’s called empathy and its derived principle of the Golden Rule which, in my view, is more than sufficient grounds for morality."

MY COMMENT: Well done Seversky! Empathy is the ultimate (God-given) rationale for morality, as we shall see. It is this rationale which motivates the succinct expression of moral code embodied in the Golden Rule. One can hardly complain if Seversky carries out a thoroughgoing implementation of this rule (which of course no human being, apart from one, can do perfectly). But for a thoroughgoing atheist there is no ultimate reason why this Golden Rule should have any claim to an absolute status; after all, it is quite likely that on the basis of a minimalist survival ethic one can imagine social contexts where putting self first may be a "better" strategy (whatever "better" means in this context). Ironically an unbridled free market may illustrate the potential for moral perversity in a world without moral absolutes: for example some claim that rampant selfish self-betterment is supposed to lead to a wealth "trickle-down" effect, an effect which from a survival point of view benefits everyone.

Nevertheless Barry has a good starter here for sharing and promoting  a common moral rationale and perhaps discussing what the origins of this rationale might be. But unfortunately he blows his chance:

ARRINGTON: This is a muddled mashup of two of the materialists’ favorite dodges.  First Sev appeals to empathy as the basis for morality.  He completely ignores several problems with this argument, including:

1.  Mere feelings are a very flimsy ground for a moral system.

2.  Some people do not have empathy (we call them sociopaths).  If empathy is the basis for morality, a sociopath has no basis for morality.


MY COMMENT: Contrary to what Arrington is claiming here, the existence of conscious cognition (which is the context in which feelings have meaning) is the only ground for a moral system, as we shall see.

Arrington inadvertently acknowledges the crucial moral role played by empathy in his reference to sociopaths; when it's not present things go badly wrong. Sociopaths have something about them which means they have no regard for the feelings of conscious cognition. To get an inkling of what it may be like to be sociopathic, think of some of those realistic "shoot-em-up" computer games: human game players have no compunction in shooting up gaming entities simply because there is no conscious cognition to empathise with! In a sense human beings who live good moral lives outside the games environment turn into "sociopaths" of sorts when they play computer games, in so far as they have no empathy (and rightly so!) for the simulated beings. These simulated entities have no consciousness and therefore no feelings. So consciousness changes everything. Perhaps it is not surprising that some atheists are inclined to deny the reality of the first person perspective of conscious cognition (as Arrington well knows - see here). For some atheists the reality of the first person perspective has just too much mystique; if there really is such a strange thing as a first person perspective inaccessible to third person observation then who knows, perhaps there's even a......

But although empathy is the ultimate rationale for morality there is a difference between empathy and moral systems. Moral systems are there to best serve a society of conscious cognitions, and therefore without conscious cognition moral systems are without meaning, goals and purpose. Moral systems are thus a means to an end rather than an end in themselves. For if human beings, like the facades of computer game entities, are a mere simulacrum with no first person perspective behind them then moral systems are purposeless and meaningless. A moral system is a code of behaviour that is cognisant of people's feelings in the context of community.

Moral systems, however, can be intellectually taxing: it is difficult for humans to anticipate all the ramifications of their behaviour in a social context and come to a reliable opinion on which moral systems best serve a community of interacting conscious entities. The moral challenge humans face therefore resolves itself into two challenges: firstly the challenge of raising a sufficient empathetic concern for other conscious entities, and secondly the epistemic challenge of having to work out which moral system best serves community interests. Human beings, of course, are only capable of imperfectly responding to both challenges. But even if we assume that a community is composed of perfectly empathetic beings anxious to get a moral system in place that best serves the community (clearly an idealistic assumption), there remains the problem that human epistemic limitations imply they are unlikely to discover a moral code that best serves community.

The Golden Rule is a neat one-liner which sums up the spirit of moral systems, but the complexity of community means that the devil is going to be in the detail; the system of moral code that best serves a community of conscious beings is only going to be fully understood if one has divine omniscience.

After Arrington's weak start, however, things improve:


ARRINGTON: 3.  Even for those with empathy, Sev offers no reason why they should not suppress their feelings if they believe the pleasure of their act exceeds the cost of the act in pangs of empathy.

Next Sev appeals to the Golden Rule as a ground for morality.  Well, Sev, it certainly is.  Yet, materialism offers no ground on which to adhere to the Golden Rule as opposed to any other rule such as “might makes right” or “if it feels good do it.”  Sev demonstrates yet again that no sane person actually acts as if materialism is true.

Sev, if you have to act as if your most deeply held metaphysical commitment is false as you live your everyday life, perhaps you should reexamine your metaphysical commitments.

MY COMMENT: Arrington's third point above does make headway: on what basis, other than ephemeral instinct, should anyone be troubled by the consciousness of other human beings rather than simply live for self? For if, as some atheists maintain, consciousness is just an illusion constructed from a complex social interface, why bother with it rather than just play as if one is in a computer game? But contrariwise I suppose that when all is said and done an atheist could still claim that whilst moral instincts and code have no real absolute cosmic significance this doesn't stop people behaving instinctively with empathy and using the Golden Rule. Let's hope that remains the case..... there is an unfortunate human history of principles based on bad ideology overruling compassion, all the way from the Nazis, through Christian fundamentalists, to the French and October revolutions.

ARRINGTON: Now let’s go to Ed, who writes:

. . . UB’s question is not worth responding to

Ed states that a person who lives by himself has no moral obligation to anyone who venture near him.  UB points out that if that is true, Ed has just given said loner a license to rape any woman who ventures too near without breaking any moral injunction.  Instead of abandoning his screamingly stupid assertion, Ed pretends UB’s extension of Ed’s premises to their logical conclusion is “not worth responding to”. Ed is not only stupid.  He is a coward.

MY COMMENT: ... or alternatively, what does the loner do if he sees someone who desperately needs help? (For example a child drowning in a pond - assuming the loner can swim.) Here we have an example of how the futility and purposelessness implicit in atheism can have a corrosive effect on one's sense of what is right.

But don't let anyone go away thinking that I'm suggesting that it is only atheists whose morality is subject to corruption: As we well know those who think they have a moral code sanctioned by divine authority and go on to implement it without cognizance of the first person perspective, are also liable to corruption; especially so if they think their reading of scripture provides an all but direct, easy and utterly certain revelation of the divine will. Whether a moral system is arrived at from first principles or based on an interpretation of Holy Writ, the fact is you can't trust human beings to get it all right!


Relevant Link
https://quantumnonlinearity.blogspot.com/2018/05/the-foundation-of-morality.html

Friday, November 22, 2019

Thinknet, Alexa and The Shopping List. Part I




In a "Computerphile" video my son Stuart Reeves explains in high level functional terms the stages involved in "Alexa" parsing, processing and responding to verbal requests. In the video he starts by asking Alexa what may seem a fairly simple question:

Alexa, how do I add something to my shopping list? 

Alexa responds (helpfully, may I say) by regurgitating the data on "Wikihow". But Stuart complains "This is not what I meant! It's a very useful thing if you didn't know how to make a shopping list, but it's not to do with my shopping list!". It seems that poor Alexa didn't twig the subtle differences between these two readings: a) Alexa, how do I add something to my shopping list?  and b) Alexa, how do I add something to a shopping list? Naturally enough Alexa opts for the second generic answer as 'she' has no information on the particularities of the construction of Stuart's shopping list. 

Important differences in meaning may be connoted by a single word, in this case the word "my". Moreover, whether or not this word actually impacts the meaning is subject to probabilities; somebody who wants a generic answer on how to construct a shopping list may have an idiosyncratic way of talking which causes them to slip in the word "my" rather than "a". If the question had been put to me I might at first have responded as Alexa did and missed the subtlety connoted by "my". However, depending on context "my" could get more stress: for example, if I was dealing with a person with learning difficulties who was known to need a lot of help, this context might set me up to understand that the question is about a proprietary situation and that the generic answer is inadequate.
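To illustrate how a single determiner can flip the reading, here is a toy matcher of my own devising; it bears no resemblance to Alexa's real natural-language machinery, and the intent names are entirely made up:

```python
# A toy intent matcher (purely illustrative - nothing like Alexa's real pipeline)
# showing how the single word "my" can flip the reading of a request.
def classify(utterance: str) -> str:
    words = utterance.lower().rstrip("?").split()
    if "shopping" in words and "list" in words:
        if "my" in words:
            return "USER_LIST_ACTION"   # proprietary: act on this user's own list
        return "GENERIC_HOWTO"          # generic: fetch a how-to answer
    return "UNKNOWN"

print(classify("Alexa, how do I add something to my shopping list?"))  # USER_LIST_ACTION
print(classify("Alexa, how do I add something to a shopping list?"))   # GENERIC_HOWTO
```

Of course, as noted above, a real system cannot simply key on "my": the word may be an idiosyncrasy of speech, which is why the problem is ultimately probabilistic and contextual.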

Stuart's off-screen assistant is invited to put a question to Alexa and he asks this: "Alexa, what is 'Computerphile'?". Alexa responds with an explanation of  "computer file"!  It is clear from this that poor old Alexa often isn't party to essential contextual information which can throw her off course completely. In fact before I saw this video I had never myself heard of "Computerphile" and outside the context of the "Computerphile" videos I would have heard the question, as did Alexa, as a question about "computer file" and responded accordingly. But when one becomes aware that one is looking at a video in the "Computerphile" series this alerts one to the contextualised meaning of "Computerphile" and this shifts the semantic goal posts completely. 

On balance I have to say that I came away from this video by these two computer buffs, who seem to get great pleasure in belittling Alexa, feeling sorry for her! This only goes to show that the human/computer interface has advanced to the extent that it can so pull the wool over one's eyes that one is tempted to anthropomorphise a computer, attributing to it consciousness and gender!

Having run down Alexa, Stuart then goes on to break down Alexa's functionality into broad-brush schematic stages using the shopping list request as an example.

It is clear from Stuart's block diagram explanation of Alexa's operation that we are dealing with a very complex algorithm with very large data resources available to it. Although some of the general ideas are clear it is apparent that in the programming of Alexa the devil has been very much in the detail. But as we are in broad brush mode we can wave our hands in the direction of functional blocks declaring that "This does this!" and leave the details to be worked out later, preferably by someone else!

***

The precise example Stuart unpacks is this:

                                             Alexa could you tell me what's on my shopping list?

Although it is clear the "Alexa" application makes use of a huge suite of software and huge data resources, I would like to show that there is in fact a very general theme running through the whole package and that this general theme is based on the "pattern recognition" I sketch out in my "Thinknet" project. 

***

An elementary Thinknet "recognition" event occurs when two input tokens result in an intersection. The concept is closely related to the intersection of two overlapping sets. For example, items which have the property of being "liquid" and items which have the property of being "red" overlap under the heading of "paint". In its simplest form a Thinknet intersection D (such as 'paint') results from two input stimuli A & B (such as 'liquid' and 'red'), and this is what is meant by a Thinknet "recognition" event. We can symbolise this simple recognition process as follows:

                                                                                           [A B]  → D
1.0

Here inputs A and B result in the intersecting pattern D. (In principle it is of course possible for an intersection to exist for any number of input stimuli, but for simplicity only two are used here.) If we now input a third stimulating pattern C and combine it with the intersection D we represent this as:

[AB] C → DC
2.0

Since A and B have resulted in the selection of the intersecting pattern D we now have D and C as the stimuli which are candidates for the next level intersection between D and C; if indeed there exists an intersection for D and C. If this intersection exists (let's call it E) then the full sequence of intersections can be represented thus:

[AB]C → [DC] → E
3.0

As an example the stimulus C might be "tin"; together with D = "paint" (the intersection of "red" and "liquid") this might result in the final intersecting pattern E being "tin of red paint".

Expression 3.0 could be equivalently written as:

[[AB]C]  → E
4.0

The square brackets are used to represent a successful intersection operation and in 4.0 the bracketing is applied twice: first to A and B, and then to their intersection together with the stimulus C. The simple construction in 4.0 is now enough to give us indefinitely nested structures. For example, if we have the general pattern:

ABCDEFG

5.0

Let us assume 5.0 has strong intersections which result in a full recognition event. To exemplify this full recognition event using square bracket notation we may have for example:

[[[ABC]D]E[FG]]

6.0

This nested structure can alternatively be represented as a sequence of intersections: The first layer intersections are:

[ABC] → K and  [FG] → M

7.0


The second layer intersection (only one in fact) is:

[K D] → L
7.1

The third layer intersection combines the outputs of 7.1 and 7.0 (that is, L and M) with the residue E in 6.0 as follows:

[LEM] → J

8.0

...this would mean that pattern 5.0, if the intersections are strong enough, implies the likely presence of the pattern J.

The operation of forming intersections is not unlike that of searching for pages on the web, where tokens are input and these tokens then define a list of pages which contain them: if the combination of input tokens has sufficient specification value it will narrow down the search to just a few pages. However, there is a difference between web searches and Thinknet in that in Thinknet the intersection itself is then submitted along with other patterns to the next layer of intersection problem solving, resulting in the nested structures we have seen above.
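To make the general idea concrete, here is a minimal Python sketch of layered intersection formation. The data, names and structure are my own invention for illustration; this is not the original Thinknet code, which stored its category information in a weighted association network rather than in a simple lookup table.

```python
# Toy association data: each known pattern is associated with a set of
# properties. (Invented data for illustration only.)
ASSOCIATIONS = {
    "paint": {"liquid", "red"},
    "tin of red paint": {"paint", "tin"},
}

def intersect(stimuli):
    """Return the patterns whose associated property set contains all
    the input stimuli, i.e. the 'intersection' of the stimuli."""
    stimuli = set(stimuli)
    return [p for p, props in ASSOCIATIONS.items() if stimuli <= props]

# First-layer intersection, as in expression 1.0: [liquid red] -> paint
first = intersect(["liquid", "red"])
# Second layer, as in 4.0: the intersection found at the first layer is
# fed back in as a stimulus alongside "tin".
second = intersect(first + ["tin"])
print(first, second)  # ['paint'] ['tin of red paint']
```

The first call corresponds to expression 1.0 ([A B] → D) and the second to the nested form 4.0, where the intersection found at one layer is submitted as a stimulus to the next.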

For advanced intersections to occur it is clear that Thinknet must have available a large data resource which effectively embodies the category information required to reach intersections. This large data resource actually takes the form of a weighted association network, and this network is a way of storing category information. How this network is formed in the first place is another story.

The foregoing examples give us a framework for discussing pattern recognition, Thinknet style. But as we are in broad-brush mode we can ignore the tricky details needed to get something to work really well and instead only talk about the general idea. 


***

If we take a context where we have the input of an unordered collection of stimuli like A, B, C, and D, we may find that at first attempt these fail to form an all embracing intersection and therefore Thinknet fails to recognise the whole context. For example:


[A D]  [C B]

9.0
Or  expressed in more explicit terms:

[A D] → E  and  [C B] → F 
10.0

Here  [A D] and [C B] have generated intersections E and F. But the intersection formation on E and F has failed to go any further. There are however at least two ways in which this "intellectual blockage" may be circumvented. The high level goal seeking programmed into Thinknet means that it lives to understand and for Thinknet "to understand" means forming intersections. It is good, therefore, if there is more than one way of reaching an understanding.

***

One way to relieve the deadlock expressed by 9.0 may be to "contextualize" the problem by adding another stimulus; let us call this stimulus G. Hence, the input becomes A, B, C, D, G. Adding G may result in a complete solution as represented by the bracketing below:



[[[A D] G]  [C B]]

11.0
Expressed explicitly in terms of intersection layers:

First layer of intersections:

[A D] → E  and [C B] → F   .....as before - see 10.0 
12.0

Second layer intersection:

[E G] → H 

13.0
...here the introduction of G means that E and G combine to generate the new intersection H.
The third layer of intersection generation results in full resolution:

[H F] → J
14.0

Thus the result J represents a complete recognition of the inputs A, B, C, D and the contextualising input G.
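As a toy illustration of contextualisation (again, invented data rather than the original Thinknet code), the rules below encode expressions 10.0 to 14.0: the inputs A, B, C and D alone deadlock at the partial intersections E and F, but adding the contextualising stimulus G allows the resolution to run all the way through to J.

```python
# Invented intersection rules following expressions 10.0 to 14.0:
# each frozenset of stimuli maps to the pattern it intersects to.
RULES = {
    frozenset({"A", "D"}): "E",
    frozenset({"C", "B"}): "F",
    frozenset({"E", "G"}): "H",
    frozenset({"H", "F"}): "J",
}

def resolve(stimuli):
    """Repeatedly replace any subset of tokens with its intersection
    until no further rule fires; return the remaining tokens."""
    tokens = set(stimuli)
    changed = True
    while changed:
        changed = False
        for needs, result in RULES.items():
            if needs <= tokens:
                tokens = (tokens - needs) | {result}
                changed = True
    return tokens

print(resolve({"A", "B", "C", "D"}))       # deadlocks at partial intersections E and F
print(resolve({"A", "B", "C", "D", "G"}))  # context G gives full resolution: {'J'}
```

Without G the process stalls at {E, F}, exactly the "intellectual blockage" of 9.0; with G the chain 12.0 → 13.0 → 14.0 completes and the single recognition J pops out.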

***

A second way which may relieve the deadlock requires a bit more sophistication. In Edward De Bono's book The Mechanism of Mind, a book on which much of my Thinknet thinking was based, we find a way of getting round this "mental blockage"; I call it a "mental blockage" because on re-submission of problem 9.0 Thinknet as it stands would simply generate the same result. But it wouldn't necessarily generate the same result if the "state" of Thinknet changed slightly every time a problem was submitted. This is achieved by ensuring that when a pattern is activated as a result of an intersection, that pattern subsequently requires a stronger signal to activate it next time. This means that it may fail to become an intersection on the second attempt, and this may open the way for other patterns with a lower activation threshold to become intersections instead.  

For example, let us suppose that E and F as in 12.0 fail to become intersections on a second attempt (or it may take a third or even fourth attempt) as a result of their thresholds being raised, and instead we find a complete intersection solution forms as follows:

[[A D C]  B]
15.0

Or in terms of intersection layers:

First layer:
[A D C] → G 
16.0
Second layer:
 [G B] → H
17.0

The deadlock expressed by 9.0 has been broken by the threshold increases on E and F, preventing them becoming intersections on later tries; it's a bit like raising land to prevent water pooling and stagnating at the same place by forcing it to pool elsewhere. The important point is that because the path of least resistance has become blocked by increasing thresholds, Thinknet has found a new route through to a complete solution. Another way of thinking of the raising of thresholds with use is as a kind of "boredom" factor which encourages Thinknet to move on and try elsewhere. 

***

When I worked on the Thinknet software I got as far as programming nested intersections, but what I didn't do was add threshold changes as a function of use; this would effectively have given Thinknet a feedback loop, in so far as outcomes would affect thresholds and the thresholds would affect future outcomes. Adding such functionality would open up a vista of possible devilish detail: in particular, making the feedback non-linear would introduce the potential for complex chaotic behaviour. If we think of a Thinknet "thinking session" as a session involving repeated re-submission of an intersection problem, a non-linear Thinknet would never quite return to its starting state. This would turn Thinknet into a system which searched for intersections by changing its state chaotically: in so far as chaotic evolution is a way of ringing the changes, Thinknet becomes an intersection search engine. Thus, the more time Thinknet spends "thinking" the more chance that a complete intersection solution pops out of the chaotic regime that thinking entails. But I must add a caution here. A chaotic Thinknet is far from an exhaustive and systematic search engine; its driving energy is chaos, a process which approximates to a random search - it is therefore not an efficient search. But one thing it is: a chaotic Thinknet is "creative" in the sense that it has the potential to come up with unique solutions where the search space is too large, open ended or ill defined to embark on a search via a systematic and exhaustive ringing-of-the-changes. 
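The threshold-raising idea can be sketched in a few lines of Python. Everything here (the pattern names, the numeric thresholds, the increment of 2) is invented for illustration; as noted above, the real Thinknet never had this feature programmed. Each pattern that "wins" an intersection has its threshold raised, so re-submitting the same problem shakes the system onto new routes:

```python
# Invented toy data: candidate patterns, the stimuli they require, and an
# activation threshold that rises each time the pattern wins an intersection.
patterns = {
    "E": {"needs": {"A", "D"}, "threshold": 0},
    "F": {"needs": {"C", "B"}, "threshold": 0},
    "G": {"needs": {"A", "D", "C"}, "threshold": 1},  # initially harder to reach
}

def submit(stimuli, max_threshold=1):
    """Return the winning pattern whose needs are met by the stimuli and
    whose threshold is low enough; then raise its threshold (the
    'boredom' factor), or return None if every route is blocked."""
    candidates = [name for name, p in patterns.items()
                  if p["needs"] <= set(stimuli) and p["threshold"] <= max_threshold]
    candidates.sort(key=lambda n: patterns[n]["threshold"])  # least resistance first
    if not candidates:
        return None
    best = candidates[0]
    patterns[best]["threshold"] += 2  # raise threshold with use
    return best

# Re-submitting the same problem finds a different route each time.
results = [submit(["A", "B", "C", "D"]) for _ in range(3)]
print(results)  # ['E', 'F', 'G']
```

On the first attempt the path of least resistance (E) wins; by the third attempt both E and F have priced themselves out and the previously unreachable G becomes the intersection, in the spirit of the raised-land analogy above.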

I will be using my sketch of Thinknet principles (and how it could work if the details were sufficiently worked out) to discuss if and how Alexa's task, at all stages, can be formulated in terms of Thinknet style pattern recognition. The general nature of this style of pattern recognition encapsulates a simple idea: Thinknet uses a few input clues which, like a Web search engine, narrow down the field by "set fractionating"; that is, where multiple "Venn diagram" sets overlap, the resulting subset may be very narrow. However, where Thinknet differs from this kind of simple search is in problem re-submission and its non-linear "thinking" process. But I must concede that this underlying chaotic searching may not be suitable for an Alexa type application because there is probably a demand for predictable and controllable output results when it comes to slave systems like Alexa. In contrast a fully fledged Thinknet system has strong proprietary goal seeking behaviour and is orientated towards seeking idiosyncratic, creative and unpredictable intersections rather than finding servile answers through an exhaustive systematic search. In short, it's too human to be of use!

Monday, October 21, 2019

Extracts from a blog post by William Dembski


On June 10th 2016  I published a blog post which included a commentary on a post by “Intelligent Design” guru William Dembski (Pictured). The title of Dembski’s post piqued my interest immediately Viz: Disillusioned with Fundamentalism. It told the story of Dembski’s demise as a lecturer at the fundamentalist leaning Southwestern Baptist Theological Seminary (SWBTS) in Fort Worth. 

I’ve said it before and I’ll say it again; ID guru William Dembski not only gives every impression of being a nice guy with a strong Christian faith but I think the implications of his work deserve serious attention. However, like some of the other nice guys I’ve mentioned in my blogs (see here, here and here) Dembski has ended up getting the rough end of the deal. If my reading of the situation is right then poor Dembski has fallen between two stools: it seems that the respected Baylor Baptist University found him “too fundamentalist” whereas more recently his ex-employer, Southwestern Baptist Theological Seminary (SWBTS) in Fort Worth, Texas, found him not fundamentalist enough, with the consequence that Dembski has swung away from fundamentalism. (That’s not such a bad thing!). Wiki has an item on the Baylor controversy, but the details of the later contention at SWBTS surfaced on Dembski’s blog. Should this post of Dembski ever go off-line, I have a copy of it here.

In this PDF document I reproduce the parts of Dembski’s post that I quoted in my 2016 blog post (In fact this introduction is largely taken from that post but my commentary of 2016 has been removed). As I did not capture all of Dembski’s original blog post it may have become a little disjointed, although I think there is sufficient to tell the story successfully.

The trouble at SWBTS started for Dembski after the publication of his book The End of Christianity, which according to Wiki:

…. argued that a Christian can reconcile an old Earth creationist view with a literal reading of Adam and Eve in the Bible by accepting the scientific consensus of a 4.5 billion year of Earth.[43] He further argued that Noah's flood likely was a phenomenon limited to the Middle East.[44]

In my PDF I reproduce the parts of Dembski’s SWBTS story that I have in my possession. Where I’ve had to interject to join disconnected parts of the story I have used non-italics in a large point size.



Postscript 6/11/2019
Dembski has had his share of censorship and ill-treatment by the liberal-left academic establishment. It is therefore not at all surprising that he should feel pushed toward the right and that we find him defending right-wingers, perhaps even identifying with them. See his posts here and here where he speaks out against the powerful mainstream marginalising the anti-vaxers, Tommy Robinson, Milo Yiannopoulos and Breitbart. I've no idea whether Dembski sees eye to eye with such people, although that said I think Dembski's experience has led him to be sceptical of vaccinations: he wrote an email telling Jeff Bezos of Amazon that he is "really getting pissed off" with how Amazon is censoring videos on the vaccination question. Although I find Robinson, Yiannopoulos, Breitbart and much anti-vax conspiracy theorising objectionable, he may have a valid concern on the subject of censorship by a powerful elite. 

But let's get this straight: Although I have a healthy respect for polarised parties such as Dembski and on the other side people like P Z Myers, the state of cognitive war between them is such that if I had the privilege to meet either of them on the wild web I would make sure my figurative gun was at the ready!




Saturday, September 28, 2019

Evolution: Naked Chance?

A set of basic construction parts; throw in a few gluons (i.e. nuts and bolts) and then agitate randomly. Will this mix generate self-replicating, self-perpetuating configurations? There is a way of doing it but is this The Way it happened in our cosmos? (Probably not!)

This post is a response to biochemist Larry Moran's essay entitled "Evolution by Accident". In his essay he leans toward the view that randomness and serendipity play a very big role in evolution. He sets himself against the natural selectionists, whom he characterises as supporting a "non-random" view of evolution. In his conclusion he writes:

I've tried to summarize all of the random and accidental things that can happen during evolution. Mutations are chance events. Random genetic drift is, of course, random. Accidents and contingency abound in the history of life. All this means that the tape of life will never replay the same way. Chance events affect speciation. All these things seem obvious. So, what's the problem?
The "problem" is that writers like Richard Dawkins have made such a big deal about the non-randomness of natural selection that they risk throwing out the baby with the bathwater. A superficial reading of any Dawkins' book would lead you to the conclusion that evolution is an algorithmic process and that chance and accident have been banished. That's not exactly what he says but it sure is the dominant impression you take away from his work.

Here's another example of the apotheosis of chance in Moran's writings:

What about Monod's argument that evolution is pure chance because mutations are random? Doesn't this mean that the end result of evolution is largely due to those mutations that just happened to occur?

There is no need for me to take sides in this debate between evolution by natural selection and evolution by "pure chance". In fact for the sake of my argument I could proceed under any mix of natural selection and so-called "pure chance".  What I want to show here is that yes, current notions of what drives evolution entail a big random factor, but it is only one aspect of evolution and there are other aspects which are even more significant.

My point will be this: Evolution is driven by chance, but it certainly isn't naked chance; in fact overall the process is very, very far from naked chance. But reading Moran's essay one could be forgiven for thinking he's pushing naked chance too far and has no awareness of how ordered, a priori, the world must really be for evolution to work. The second quote above from Moran would be less misleading about the process of evolution if "pure chance" were replaced by "chance". Evolution cannot be pure chance as we shall see. Moran is either oblivious to this fact or sees it as not worthy of note; but I see it as highly significant, in fact evolution's most significant feature.  The fact of the matter is that the chances in the process of evolution must be subject to a highly constrained envelope if it is to work.

This apparent obliviousness of Moran to the organising envelopes which must constrain the chance diffusion of evolution may trace back to Moran's very partisan form of atheism and the related sentiments I identify in my post on the many worlds cosmology: The purposeless backdrop of the atheist world-view has grave difficulties in making sense of any built-in cosmic bias or preference toward certain states of affairs; in particular an ordered status quo. This bias leads to tricky questions like why this and not that?  These questions in turn may raise what to some is the demon of "purpose" and from there it is short walk into at least a conjectured theism. Partisan atheism finds any contingent bias in the cosmos uncomfortable and finds it easier to handle a cosmos where everything is evenly favoured with the "butter" of probability spread uniformly, thus betraying no sense of preferred statuses in the cosmic dynamic (See here).

My arguments follow even if natural history is driven by some mechanism completely different to standard evolutionary mechanisms. This is because my argument is about logic; that is, it is of the kind "If so & so then it follows that... etc.", where "so & so" stands in for "standard evolution", which may not actually hold good (and I must confess I have my doubts!). What concerns me here, however, is what follows if we assume standard views of evolution. Therefore the following proof can be advanced without placing intellectual stakes in particular evolutionary mechanisms as they are currently conceived.

***

In my post on The Mathematics of the Spongeam I used the following equation as a way of talking about evolution:

∂Y/∂t = D∇²Y + VY
This is basically a diffusion equation with an added term, where Y represents a population value subject to diffusion in a multidimensional space. The first term on the right hand side is the ordinary diffusion term expressed in n dimensions and is a way of representing a random walk across configuration space. The second term introduces the net result of multiplication and death at a particular point in configuration space. As I pointed out in my previous post on this equation, although it encapsulates many complexities it is a huge simplification of reality; for example the diffusion constants in front of the ∇² operator could vary with dimension and coordinate. Moreover, as I also pointed out in my previous post, as it stands it doesn't explicitly acknowledge an important potential non-linearity. Viz: if V depends on the environment then it is clear that the value of Y is part of that environment, and therefore V is not just a function of the coordinate system but also of Y. However, in spite of these simplifications, which reduce the whole equation almost to a cartoon, I can nevertheless use it to express my problems with Moran's brand of thinking.

The diffusion term in the above equation expresses an important feature of evolution; namely, that it proceeds in small random steps. Given this picture one thing becomes very clear: it is fairly intuitively obvious that a pure random walk would never produce what we are looking for in terms of the highly organised complexities needed for self-perpetuating, self-replicating structures. If, in the very remote chance, any organised complexity was arrived at via random walk, that same walk would ensure that it very quickly dissolved back into the sea of general randomness. For organised structures to have a chance of persisting for any length of time we need the second term on the right hand side of the equation, that is VY. Where configurations have a net replication rate V will be positive; where populations have a net decay V will be negative. However, since we expect the number of configurations with sufficient organised complexity to return a positive value of V to be very small compared to the whole of configuration space, this doesn't give us very many viable self-replicating configurations to spread across that space. Hence, relatively speaking, configuration space will be almost empty of self-replicating configurations; we know this simply because a successful replicator will clearly have to be sufficiently organised, and organised structures constitute a very tiny percentage of the whole of configuration space (see my book on Disorder and Randomness).
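A toy numerical illustration of this point (my own, not from the original spongeam post; all the constants and the shape of V are invented): integrating a one-dimensional discretisation of the equation shows that diffusion alone only smears population away, whereas a region of positive V (a "channel") allows the population to persist and concentrate.

```python
# 1-D explicit finite-difference integration of dY/dt = D*d2Y/dx2 + V(x)*Y.
# All numbers are invented for illustration; stability needs D*dt/dx**2 small.
D, dt, dx, steps = 0.1, 0.1, 1.0, 200
n = 50
V = [0.05 if 20 <= i < 30 else -0.05 for i in range(n)]  # a narrow "channel"
Y = [1.0 if i == 25 else 0.0 for i in range(n)]          # seed population

for _ in range(steps):
    # Discrete Laplacian with periodic boundaries (the diffusion term).
    lap = [Y[(i - 1) % n] - 2 * Y[i] + Y[(i + 1) % n] for i in range(n)]
    # Add the replication/death term V*Y and step forward in time.
    Y = [Y[i] + dt * (D * lap[i] / dx**2 + V[i] * Y[i]) for i in range(n)]

inside = sum(Y[20:30])               # population inside the channel
outside = sum(Y[:20]) + sum(Y[30:])  # population that diffused out of it
print(inside > outside)  # True: population concentrates where V > 0
```

The inequality holds because the channel's positive V sustains what diffusion alone would dissipate; set V negative everywhere and the population simply decays toward zero, which is the "pure random walk" scenario described above.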

Now here's the rub: since evolution must start from square one (i.e. no replicators) the set of self-perpetuating, self-replicating configurations must be fully connected all the way from the most elementary replicators to sophisticated organisms. This requires V to be such that it forms channels in configuration space favourable to replication, along which evolving populations can diffuse. To assist the imagination in visualising this connected multidimensional set I use my picture of the spongeam. Viz:



The above is a three dimensional visualisation of what is in fact a multidimensional object; it is a system of very tenuous fibrils spanning across largely empty space. The spongeam may or may not actually exist, but we at least know this: it is a necessary pre-condition of evolution, at least evolution as it is currently understood.

This picture of the spongeam actually prompts a pertinent question: if replicators, which necessarily have to be highly organised, are so utterly dwarfed in number by the immense size of configuration space, are there actually enough of them to populate configuration space with a class of points sufficiently connected to allow evolution by diffusion from square one to advanced and complex organisms? I have my doubts, but for the sake of the argument we will proceed as if there is a spongeam sufficiently connected to allow diffusional migration from simple structures to complex self-replicating organisms.

Whether a spongeam exists to facilitate evolution depends very much on the form of V. It is possible to "cheat" and simply patch pathways and channels into configuration space ad hoc to ensure that a wide range of replicators can be reached by diffusion. But such special pleading doesn't seem to be how our cosmos works; its fundamental parts are more like a construction kit such as Meccano, where a few fairly elementary parts and their "fixing" rules are specified and we are then left with the problem of whether such a system, given the diffusion dynamic, can build or "compute" general replicators. Let me say again: I have my doubts, but that's really another story**. Suffice it to say here that given current understandings of evolution, Larry Moran's characterisation of it as a process of "pure chance" is entirely misleading.

In the spongeam picture the difference between natural selection and Moran's emphasis on the randomness of the evolutionary walk is fairly minor: in Moran the neutral diffusive evolutionary "channels" of the spongeam are akin to "level" pathways where there is no bias pushing in a particular direction. Natural selection on the other hand corresponds to the case where the pathways have a kind of slope, by virtue of a changing slope in V, which means that the random walk is biased in a particular direction and therefore proceeds more rapidly in that direction. But as far as my equation is concerned this is just a variation on a theme and evolution may proceed under both circumstances. (The irony is that if evolution has occurred then I'm very favourable to Moran's concept of evolution: it minimises the idea of evolution as a fight for survival and suggests instead something more like the drifting apart of languages when communities are separated.)

But whatever! My main point is that all this exposes Larry Moran's misleading view of evolution; for whether evolution is neutral or biased, both scenarios require a spongeam envelope that introduces a considerable information constraint; that is, evolution is a process which in this sense is far from maximum randomness and presupposes a highly organised constraint spanning configuration space.

I'm not here arguing whether or not the spongeam actually exists in the real world: Rather I want to make the point that if it does exist - as it must if evolution has occurred as currently conceived - it does no justice whatever to describe the process of evolution as if it is pure randomness at work: Pure randomness would lead to nothing: If the spongeam does exist in our cosmos then presumably its convoluted network of fuzzy pathways is determined by the set of fundamental particle interactions.

That evolution must start with a huge information bias has been proved generally by William Dembski. Dembski has been unjustifiably abused for his efforts; a sign of the worldview stakes involved in the debate. Although I don't accept the inference that some IDists have thence drawn from Dembski that information can't be created, Dembski's result is sound. I present a back of the envelope proof of this theorem in this paper.


Relevant Link:
http://quantumnonlinearity.blogspot.com/2018/01/evolution-its-not-just-chance-says-pz.html

Footnote
** The de facto "Intelligent Design" community are quite dogmatic on this point: For them there is little or no doubt that given the cosmic physical regime evolution, as currently understood, is impossible and that "Intelligence" of some sort (they don't elucidate what sort) needs to step in somehow (they don't elucidate how) and do its stuff of filling in the creative gaps in the creation story. The irony is that what raises a question mark over their thesis is the very existence of the super-intelligence they posit. For if that intelligence is none other than an omniscient omnipotent God then who knows what such a being is capable of: For all we know there may be a set of particle interactions which imply a spongeam and that God may have chosen that set! (although as I must repeat, I am myself doubtful about the existence of the spongeam - but I might be wrong!)