Monday, July 24, 2023

North American Intelligent Design's response to my last two posts. Part 1


Unfortunately, the NAID view of Intelligent Design invites
this sort of mockery. They've only got themselves to blame. 


I'm going to critique the following two posts on Evolution News. They are relevant to the points I raised in my last two posts (See here and here). 

Physics, Information Loss, & Intelligent Design | Evolution News

But I must start by criticizing this post:

Is Life an Information Ratchet? | Evolution News

...which is referenced in the first link.

The author of these links, Eric Hedin, hasn't defined exactly what he means by "information" but instead assumes we know what he, and presumably his ID tribe, means by it. For Hedin the connotations of the label "information" are proprietary to him and his NAID tribal group, and these meanings can be invoked at any time by NAID protagonists to deflect criticism, leaving their critics firing at a NAID target of unknown whereabouts. 

***

So, starting with the second link, Hedin writes:

HEDIN: An information ratchet would be some mechanism or process that causes the information content of a system to increase with the passage of time and prevents or limits its decrease. Key to understanding any ratchet mechanism is to grasp that its performance is predetermined by its mechanism

MY COMMENT: This statement is telling us what the average NAID never expects to find: a "natural" ratchet which entails an increase in "information", whatever "information" means in this context. Algorithms do exist which increase "information" in the Shannon sense of the term. Viz: simple algorithms (such as binary counting) can be devised which, starting from configurational uniformity, systematically work their way through a gamut of configurations and so eventually arrive at random configurations (See here for a definition of randomness). An observer with no knowledge of the background algorithm would then, on the basis of Shannon's definition of information, see these complex random sequences as having maximum information. (See here for the definition of probability.) On these definitions, then, we would have an example of information being created. The predetermination of these systems is neither here nor there: the appropriately uninitiated observer sees information being created (in the Shannon sense).
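To make the point concrete, here is a minimal sketch (my own illustration, not Hedin's; the 16-bit counter is an assumption purely for the example) of how a trivially simple deterministic generator looks information-rich to an observer who doesn't know the algorithm, and information-free to one who does:

```python
import math

def counter_state(step, n_bits):
    """Return the counter value at `step` as an n-bit configuration (a string of 0s and 1s)."""
    return format(step % (2 ** n_bits), f"0{n_bits}b")

n = 16
for step in (1, 2, 3, 40000, 54321):
    config = counter_state(step, n)
    # An observer ignorant of the counter treats every n-bit configuration as
    # equally likely: P = 1/2^n, so the Shannon surprisal is -log2(P) = n bits.
    uninformed_bits = -math.log2(1 / 2 ** n)
    # An observer who knows the algorithm and the step count assigns P = 1: zero bits.
    informed_bits = -math.log2(1.0)
    print(config, f"uninformed observer: {uninformed_bits:.0f} bits,",
          f"informed observer: {informed_bits:.0f} bits")
```

The generator itself is tiny, yet from the uninformed observer's standpoint each successive configuration carries the maximum 16 bits; that observer-relativity is the point being made above.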


HEDIN: Do natural ratchets exist in the physical, non-living world? Examples of natural mechanisms that approach the specific functionality of human-designed ratchets seem to be lacking. We might, however, claim that gravity is a sort of natural ratchet seen on Earth, in that it moves objects down and limits them from moving up. However, its target direction is only generally located, so that material may take any circuitous route in moving to a region of lower elevation, and that region could be anywhere on, or even within the planet.

 While acknowledging the ratchet-like effect of gravity to move objects down (to a lower gravitational potential) we must avoid the error of attributing additional abilities to this natural ratchet-like phenomenon. For example, while gravity can cause rocks to slide down a mountain slope, it cannot assemble those rocks into a castle in the valley. Why not? Simply because the mechanism is not designed to accomplish this task.

MY COMMENT: Firstly, we can say that configurational ratchets do exist in nature: under the right conditions of temperature, pressure and concentration, crystalline configurations capture atoms which then have a low probability of leaving the structure. In this change the high disorder of a solution of molecules (for example, of salt) gives way to the high order of a crystal. This doesn't violate the second law because the lower energy of the crystal means it loses heat to the environment, which in turn implies that the combined system of crystal and heated environment has a higher statistical weight; that is, the overall entropy of the total system, crystal plus environment, has increased. 

When it comes to the creation of fullerenes, we have a "natural process" which can create relatively sophisticated structures: it would be quite a challenge for humans to design and engineer a system that generates buckminsterfullerene structures artificially. It goes to show that the question of whether so-called "natural processes" (sic) have been designed to generate configurations poses itself as soon as we look at nature; in fact even before we start pondering the question of organic configurations. Unfortunately, however, the NAID's flawed explanatory filter imposes a "natural forces" vs "Intelligent Design" dichotomy at this point, which means that the filter does not naturally pick up on the possibility of design in the very fabric of reality. This seems to be the result of an affectation in NAID culture to retain the gloss of a "scientific" community by not mentioning "God", a being who transcends the fabric of reality (although "God" is, in fact, implicitly recognized in NAID subtext).  In NAID philosophy the unidentified intelligence (which could be little green men) plays the role of a kind of auxiliary agent of physical causation to be invoked when attempts at explaining configurations in terms of so-called "natural forces" fail. The NAIDs are looking for evidence of direct intelligent action involved in the initial creation of organic configurations. In contrast, highly organized crystals are clearly generated by known physical conditions and therefore the NAID explanatory filter classifies them as caused by "natural forces" and not intelligent agency; but for a transcendent theist such as myself, this classification is clearly wrong. 

The NAIDs would be very wary of the suggestion that if a transcendent omniscient omnipotent God can create a cosmos which generates crystals then perhaps he's gone a whole lot further and designed a cosmos which generates life. To admit that the "natural" cosmos may be provisioned to generate life would be a betrayal of the NAID paradigm; namely, that those "natural forces" are "blind" and ineffectual as far as the generation of life is concerned and must be contrasted against the mysteries of "intelligence". This, in my opinion, may be underrating the provisions of God's creation. 

But to explicitly mention God in a cosmic design context is all but an embarrassment in NAID circles because they affect to style themselves as making a purely scientific point. They are then inexorably drawn by the logic of their precepts to the conclusion that intelligence itself, even in human beings who are manifestly part of the material creation, is somehow transcendent of those so-called "natural forces".  


ERIC HEDIN: Random outcomes of a rockslide might include one slab-shaped rock leaning against another, resembling a lean-to, but any structure with the complexity of stonework typically seen in a castle could never happen within the spacetime limits of our universe. For most people, this is common sense. As Douglas Axe writes in his book, "Undeniable", our design intuition is correct regarding the improbability of functionally coherent outcomes (such as a castle) occurring by chance. For such complex, functional results, the ratio of “correct” outcomes to “incorrect” outcomes is too small to be obtained by any undirected process in a finite universe such as ours.

MY COMMENT: Although it is clear that God's creation is equipped with some sophisticated configurational ratchets which generate things like galaxies, stars, planets, and crystals, this is unlikely to satisfy the NAID's demand for a ratchet which increases organic "information". But what does "information" mean in this context? If we use the well-established Shannon definition of information (that is, -log[Prob]), it actually comes up with different answers depending on how we apply this definition.

If we are talking about the unconditional probability of configurations like crystals, then because crystals as a class have a minute statistical weight the unconditional probability of a crystal appearing is vanishingly small, and correspondingly crystals have a very high information content. That is: 

Prob  = P(crystal class) ~ very small  => high information

Equation 1

Ergo, a big crystal is unlikely to form in the lifetime of the universe if the cosmos were a purely random affair. But in contrast, if we are talking about the conditional probability of a crystal, that is, given the laws of physics and the right environmental conditions etc. (i.e. the physical conditions), this probability is very high and therefore a crystal then has a low information content. That is:

Prob  = P(Crystallisation | Right physical conditions) ~ high => low information

Equation 2

So, as far as the atoms/molecules of the crystal are concerned we have a ratchet here which locks in what would otherwise be a high information configuration. Because Shannon information is based on probabilities, and probabilities are observer-relative, whether or not a crystal is a high information configuration depends on one's point of view: coming to a crystal without any knowledge at all, we find that it has a very low probability and therefore high information; in this context crystallization has effectively ratcheted in a high information configuration. But once we become aware of the nature of the physical conditions, a crystal then assumes high probability and therefore low information. 
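As a quick numerical illustration of the point (the probabilities below are purely hypothetical placeholders of my own, chosen only to show the shape of the calculation), the Shannon measure -log2(P) gives wildly different answers for the same crystal depending on which probability we feed it:

```python
import math

def shannon_information_bits(p):
    """Shannon information I = -log2(P), in bits."""
    return -math.log2(p)

p_unconditional = 1e-30  # assumed: a crystal assembling out of a purely random regime (Equation 1)
p_conditional = 0.99     # assumed: crystallisation given the right physical conditions (Equation 2)

print(f"Unconditional: {shannon_information_bits(p_unconditional):.0f} bits -> high information")
print(f"Conditional:   {shannon_information_bits(p_conditional):.3f} bits -> low information")
```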

But now consider these two equations:

Prob  = P(organism) ~ very small => high information

Equation  3

Prob  = P(Organism | Right physical conditions, i.e. the physics of the reproductive system) ~ high

 => low information

Equation 4

Clearly, again, the extremely low statistical weight of the class of organisms means that the unconditional probability of life (Equation 3) is vanishingly small: we don't expect an organism to assemble itself from a purely random physical regime. In that sense organic forms have a high (Shannon) information content (or high "surprisal" values, as it is sometimes expressed). However, given the right conditions (i.e. physics and a reproductive system) a new organism has a high chance of forming. Under these circumstances life has high probability and therefore low information. 

The physical ratchet which creates crystalline structures given some fairly basic physical conditions is one thing, but the physical ratchet which creates an organism from a reproductive system is quite another. A reproductive system is a highly sophisticated configuration which itself has a vanishingly small unconditional probability; that is, it is a high information system. 

It is relationships like equation 4 which have led NAIDs to believe there is such a thing as a conservation of information: that is, in order to create the sophisticated configurations of life one must already have in place a sophisticated generating configuration, which, because it has a vanishingly small probability, will have a correspondingly high information content. So, lowering the information of an otherwise improbable outcome comes at the expense of another high improbability; in this case a sophisticated life-generating engine in the form of a reproductive system. At this juncture the conservation of information that Hedin and other NAIDs promote seems plausible. But recycling the words of H. G. Wells' Time Traveller at the end of chapter 6 of Wells' The Time Machine:

"Very simple was my explanation, and plausible enough - as most wrong theories are!

...as we will eventually see!


ERIC HEDIN: Could a natural information ratchet exist? Since our goal is to understand whether life is an information ratchet, we first need to examine what kind of mechanism might be required to cause a living system to ratchet up its information content over time. To increase information means to select outcomes that correspond to a greater level of functional or meaningful complexity. The only way for this to happen is if the selection mechanism (in other words, the ratchet) is designed to produce the target outcome, and this means that the mechanism must already contain the information specifying the target. A physical mechanism cannot produce any information beyond what it already contains.

MY COMMENT: Here Hedin talks of "functional or meaningful complexity", which I assume is his way of trying to distinguish between the hyper-complexity of randomness and those strange organic configurations which are at once both complex and yet organized. But rather than use vague ideas such as "functional or meaningful complexity" we need something a little more solid. What living configurations are all about is more clearly and less mysteriously expressed as this: the main feature of the molecular configurations of life is that, given a particular environmental context, they are capable of self-maintenance and self-multiplication in that environment. Therefore, should these configurations come about (repeat, should they come about), their self-perpetuating properties lock them in: in short, they constitute the "teeth" of the ratchet we are looking for. In fact, as we know from observation of the organic world, there are many, many of these self-perpetuating "teeth" occupying configuration space, ranging from single cells to huge communities of cells in symbiotic relationship. But creating self-maintaining and self-multiplying configurations from scratch would challenge the intelligence of any human designer; such configurations are fine examples of complex organization (that is, they are configurations which occupy at once the world of high organization and at the same time the world of high informational complexity as per randomness).

The question Hedin is really trying to ask is this: has God created a physical world with an information ratchet which favours the emergence of living configurations? This would mean that the conditional probability of life is well above its absolute unconditional probability. Well, we've got the teeth, but what we also need to know is this: have the teeth been arranged into a ratchet? More technically we can put the question thus: does the spongeam exist with a sufficient diffusion dynamic to allow diffusion to permeate its structure?

Self-perpetuating configurations are highly organized. That is, as a class they have a very low statistical weight, and this means, as per equation 1, that we can say this of their unconditional probability: 

Prob  = P(class of self-perpetuating structures) ~ very small => high information

Equation 5

...and if our physical regime is to have a realistic chance of generating such structures then in analogy to the conditional probability of equation 2 we require:

Prob  = P(class of self-perpetuating structures | right physical regime) ~ realistically high

=> low information

Equation 6

The big question, however, is this: does the "right physical regime" exist? Since I'm not coy in invoking the agency of a transcendent, omnipotent, omniscient God, that such a regime just might have been chosen & created for our cosmos is a possibility that I must take seriously and not dismiss with a wave of the hand as mere "blind natural forces". As I've said before, the creation, in all its contingency, is hardly "natural".
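To make the question a little more tangible, here is a toy sketch (entirely my own illustration, not a model of the spongeam itself; the state counts, step limits and tooth positions are arbitrary assumptions) of how the answer hangs on how the regime arranges the self-perpetuating "teeth" in configuration space: an undirected random walk finds the teeth easily if the regime places them close together, and essentially never if a lone tooth sits far out in the space:

```python
import random

def diffusion_reaches_tooth(teeth, n_states=1000, steps=20000, trials=200):
    """Fraction of undirected random walks (starting at state 0) that reach a locking 'tooth'."""
    hits = 0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos = max(0, min(n_states - 1, pos + random.choice((-1, 1))))
            if pos in teeth:      # a self-perpetuating configuration locks itself in
                hits += 1
                break
    return hits / trials

dense_regime = set(range(50, 1000, 50))   # teeth spaced closely enough for diffusion to find them
sparse_regime = {990}                     # a single isolated island of functionality
print("densely toothed regime :", diffusion_reaches_tooth(dense_regime))
print("sparsely toothed regime:", diffusion_reaches_tooth(sparse_regime))
```

Nothing in the walk itself is directed; whether it settles on a tooth is fixed by the structure built into the regime, which is exactly where the question of provision (or design) re-enters.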


ERIC HEDIN:  But natural processes cannot produce unnatural results. Selection based on the ratchet mechanism of increased fitness cannot of itself produce novel complex functionality if each successive small change does not give some increased advantage towards survival and reproduction.

But as shown in our examination of the functionality of any ratchet mechanism, it cannot produce an outcome beyond what it was designed to achieve. With information as the outcome, the mechanism can only reproduce the level of information it already contains.

MY COMMENT: Three comments here:

1. Playing with Semantics: Those very contingent "natural processes" are in fact very unnatural in not having "natural" aseity, but they have been created by a transcendent entity that presumably does have aseity. So, on the basis of this semantics, we shouldn't be surprised if those "natural processes" produced "unnatural results". It's also worth noting that the "unnaturalness" of creation isn't merely a past-tense event: it is in fact present-tense-continuous. Those descriptive transcendent laws, whether statistical or deterministic, have no logical necessity to persist in regulating the cosmos, but the fact is they do persist. The physical regime is as unnatural as "unnatural" can be, and I would venture to say that unnatural results are par for the course. 

2. The Ratchet: As for the ratchet mechanism: well, we've defined an organism as a "ratchet tooth" in that it locks itself in given the right environment. But the big question here is this: are those ratchet teeth close enough to allow the diffusion process to work its way through configuration space to those highly complex yet highly ordered self-perpetuating organic configurations? IDist William Dembski talks about this subject in terms of the space between islands of functionality. If this question is answered in the negative it would be an evolution stopper - at least for evolution as conventionally understood. But if one is prepared to admit that we are dealing with a creative transcendent intelligence of unimaginable power, we simply can't dismiss the possibility that God may have provisioned the cosmos with a physical regime that inserts the right physical conditions into equation 6 (i.e. a spongeam), thereby considerably enhancing the probability of life being generated. If this is the case, then we could recycle Hedin's last sentence above as: 

In our examination of the functionality of any ratchet mechanism, it cannot produce an outcome beyond what it was designed to achieve; in this case the physical regime has been designed to generate life itself. 

...and as far as the human observer is concerned such a physical regime (if it exists) would appear to generate, that is create, information. 

3. Information conservation: Hedin hasn't defined how he understands "information", an omission which makes his last sentence in the quote above incoherent. But contrary to what he is attempting to incoherently tell us here, we find that by any intuitively compelling standards information can be generated (See Part II).


ERIC HEDIN: Another Process at Work: Given the obvious, that the complexity of organisms on Earth has increased through time from single-cell archaea to functional multicellular creatures, some process other than a supposed evolutionary information ratchet must have been at work. The genomic information content of the prokaryotic cells descriptive of the earliest life on Earth falls far short of the greater information content and complexity of advanced life. An intelligent mind is the only known source for the necessary input of complex specified information throughout biological history. Attributing the vast diversity of life on Earth to intelligent design provides an explanation more in line with reality than the misguided concept of an information ratchet.

MY COMMENT: I think most of us accept that organisms have increased their organizational complexity through time, as Hedin says; at least from single-cell organisms to those huge complexes of symbiotic cells. However, I have heard some atheists being rather diffident about measures of complexity which betray an arrow of time pointing in the direction of increasing organized complexity. This is because such a trend is so easily coopted as evidence of a cosmos with a progressive purpose, and this makes some atheists very nervous; strong atheists like to think that there is no selective contingency in the cosmos and that all options are kept on the table with equal probability. As for the IDists, they should be asking themselves: why this developmental pattern in life? And why is there so much evidence for nested cladistics? That to me hints at some kind of "natural" (sic) process at work rather than ad hoc injections of "information" from time to time (see picture at the head of this post). 

Yes, I can accept that we may well come to the conclusion that the only sense we can make of the progressively fruitful organization of the cosmos and above all its clearly biased contingency, is for it to have its origins in a transcendent a priori intelligence. But where the NAID community go wrong is that they fail to take into account the plausible implications of transcendent intelligence. Those very plausible implications are that the very created unnaturalness of the fabric of the cosmos has been provisioned to generate life. Instead, NAIDs use the model of a cosmically in-house intelligence (that is, one that is part of the cosmos) which takes the fabric of the cosmos as a given and then creates configurations from the fabric of creation by tinkering with it as might a human or alien intelligence. Their view of matter is one of it being a passive medium like blind clay in the hands of a potter. But as soon as we admit a transcendent omnipotent omniscient creator of matter then the possibility of a proactive information ratchet appears on my suspect list. 

***

The NAID community have painted themselves into a corner with a paradigm that dichotomizes intelligent design and "natural forces" and expresses itself in their flawed explanatory filter. Their paradigm works if we are dealing with cosmic "in-house" intelligence such as humans or aliens, but fails if we are looking for an omnipotent & omniscient transcendent intelligence such as the Judeo-Christian-Islamic God. But their paradigm is now entrenched in the NAID community. They are therefore committed to downgrading the created processes of reality to mere passive "blind natural forces" rather than seeing those forces as the instrument of transcendent intelligent activity, yesterday, today and forever.  

My ongoing critique of NAID philosophy will continue as I look at Hedin's second article. 

Sunday, July 02, 2023

UPDATE: Dualistic ID's Quixotic Quest

As with the development of life the NAID view of
human intelligence is that it transcends the capabilities of 
God created matter.

As if to prove the thesis of my last post about the North American Intelligent Design (NAID) community being unable to think out of the box they've created for themselves, up pops an article on Evolution News from the same author I quoted in that post which plays right into my hands. Viz:

Intelligence Is Unnatural, and Why That Matters | Evolution News

I will concede that it is just possible the Good Creator, over the course of millions of years, has patched in living material configurations ad-hoc style. But I have my doubts about that given my understanding of the way the Creator works and also given the insights of my Melancolia I project.  Therefore, I keep in my mind the competing idea that God has provisioned His creation, perhaps via the spongeam, with a high probability of generating life given cosmic dimensions. It is this possibility, as we saw in my last post, which the NAIDs are committed to denying. They also, apparently, are committed to denying that intelligence "in-house" to the cosmos (such as human intelligence) can be simulated algorithmically and that it is an application of created matter.  And here's the evidence that they put the intelligence of human beings into a category which cannot be reduced to "natural forces"; for at the end of the above article, we find this conclusion (my emphases):

Human expression manifests the unnatural attributes of creating art, literature, and technology — outcomes that would never arise by the influence of natural processes alone. Freedom and creativity complement one another; neither will flourish under controlling forces. If the forces of nature governed our thoughts and actions, would we see the vast panoply of creative human expression displayed throughout the history of civilization? It seems not.

Reading the article it is clear that this conclusion is largely based on the author's gut reactions. But if God is omnipotent and immanent (Acts 17:28) then those "forces of nature" are constantly maintained by the thoughts and actions of God - in particular the rich complex novelty of randomness would require the complex thoughts of an omni-author to maintain it:  "Natural forces" (sic) never act alone. And if I'm right it is that very randomness which gives humanity both its creativity and its consciousness. (See here and here). 

The fear of those so-called "natural forces" runs deep in our culture. The ghosts of deism haunt Western culture even today. The creation is a very unnecessary contingency, everywhere and everywhen; it has no property of aseity and in that sense it is as unnatural as unnatural can be. 

Tuesday, June 27, 2023

For the Trumpteenth time: Dualistic ID's Quixotic Quest

Casey Luskin is part of the North American ID community. 

Although I would classify myself as an Intelligent Design Creationist I have found the North American Intelligent Design community (NAID) and its followers to have fallen far short of their promise. Spurned and rejected by the liberal academic community they have fallen into the arms of the North America right-wing with its anti-academic-establishment politics. This polarization toward the right means that the NAID community have painted themselves, a priori, into an anti-evolutionary corner, giving them little choice but to embark on the Quixotic quest to find a basis in physical law which blocks all possibility of evolution being behind the emergence of life over many millions of years. They have so far failed in their quest.......

ONE) They have been misled by dualistic notions of a "natural forces" vs "Divine Creation/Intelligent Design" dichotomy. In order to maintain the affectation of being a scientific community they are unwilling to identify the Designer as God, an entirely different genus of entity from cosmic "in-house" intelligence (e.g. humans, "aliens" etc.). In-house intelligence, by definition, stays within the cosmic constraints of physical law - but introducing the transcendent intelligence of God changes things considerably - see point TWO below. 

The upshot is that the NAIDs appear to have missed the need to address the distinct possibility that those so-called "unguided natural forces" (sic) may well embody divine miraculous provisioning in terms of a suite of physical laws which facilitates the emergence of life over millions of years.

TWO) The error of not distinguishing between in-house intelligence and the transcendent intelligence of God has fed through to their simplistic epistemic filter, which doesn't work properly. It also means that NAIDs are pressuring themselves into favouring the idea that even in-house intelligence, such as we see in human beings (and animals like chimps, cats & dogs etc.), has an exotic non-algorithmic basis. Well, maybe (although I think probably not, see here and here), but the problem with this is that the NAIDs have started to close down their options and therefore they are going to find difficulty in keeping in their heads two competing hypotheses about the nature of intelligence. That, I tender, is an outcome of the social polarization in the US, where there is pressure to commit to one side or the other of the sharply defined left vs right battle lines between academics and right-wingers. 

THREE) Their attempts to use the second law of thermodynamics in its untweaked form as an evolution stopper have failed; even a young earthist guru advises that it not be used for this purpose.


***

None of this is to say that evolutionary mechanisms, as conventionally understood, are sufficient to explain the emergence of life over millions of years, particularly abiogenesis, but it is all too obvious to me that the NAID community continue to make heavy weather of their mission: They have become a clique of self-comforting, mutually back-slapping right-wing comrades embattled, persecuted even, by an otherwise hostile academic establishment.  It's no surprise they have aligned themselves with other right-wing causes like anti-anthropogenic-climate-change, anti-vax notions, the gun-lobby and an obsession with polarised concepts of sex and gender. Fortunately, I haven't yet seen any evidence that NAIDs are into conspiracy theorism, but that may yet come!

However, I think NAIDs do at least now understand that the second law of thermodynamics, as it is currently formulated, is no block to evolution; after all, the second law only demands an increase in the overall entropy of an isolated system, a system where local decreases in entropy may yet occur. But the NAIDs, spurred on by the possibility that there may be a version of the second law which can be applied at all levels of a physical system, continue to search for a physical law which bars evolution in all subsystems of a system. Consider this writer on the NAID website "Evolution News"; he dreams of a principle (in fact he thinks it's been found!) which he refers to as the generalized version of the second law...... 

The traditional Second Law of Thermodynamics is viewed as an inviolable arbiter of possible outcomes for all physical processes. In particular, any conceivable proposal for a perpetual motion machine can, without analysis, be rejected based on the Second Law. With regard to the origin and development of life, the generalized Second Law states that any “alleged natural explanation…will be untrue in the same way a patent examiner in Washington, DC, knows an alleged invention for a perpetual motion machine is untrue.”

Well yes, perhaps the Good Creator has patched in piecemeal-wise constructive miracles over millions of years, thus implementing His grand designs in an ad-hoc fashion...... and yet because He is the Transcendent, Omniscient, Omnipotent Creator maybe He has miraculously provisioned the physical world with sufficient constraint for the diffusive dynamic in configuration space to settle on those self-perpetuating configurations called life. But NAIDs have barricaded themselves into a corner and have thereby committed themselves to a quest to prove that God's Created world is one of "unguided natural forces" (sic. A phrase used by many NAIDs including the above writer). In their need for mutual moral support in the face of a hostile academic establishment the NAIDs have found solace in a mutual-back-slapping community which blocks any thought of hypothesized evolutionary scenarios, scenarios which even William Dembski has acknowledged are not inconsistent with the core concept of Intelligent Design Creationism. See also here where Dembski says of one of his books: “Even though argument in this book is compatible with both intelligent design and theistic evolution, it helps bring clarity to the controversy over design and evolution.”

But the creation, with or without evolution, is clearly not unguided even on the basis of the admissions of the NAIDs themselves: after all, they make a lot of cosmic "fine tuning", which (although a necessary condition for evolution rather than a sufficient one) constitutes the kind of mathematical constraint that would mean our cosmos cannot be classed as a place of "unguided natural forces" (sic).  

NOTES

The NAIDs, along with many young earthists, make the claim that "natural forces" cannot create information. This actually very much depends on how one defines information, but if we are talking about configurational information then this NAID belief is manifestly false. As I have said here and proved here, in parallel computation information (or complexity) is created according to

Ic = Smin + log (Tmin)

where Ic is the configurational information content, Smin is the minimum length of the algorithm needed to generate the configuration, and Tmin is the minimum number of execution steps that algorithm requires.

The logarithm of time explains why information is only slowly generated with time and this can give the false impression that those Created "natural forces", when in parallel processing mode, don't create information. However, the NAID thesis manifestly falls over when expanding parallel processing is employed and selection is made with teleological constraints. But this is even true (although less obviously) of today's parallel processing paradigm where useful algorithms generate configurations which are then selected on the basis of teleological search constraints.           
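As a quick numerical illustration of the formula just quoted (my own sketch; the 500-bit algorithm length is an arbitrary assumption, and I have taken the logarithm as base 2), the log term means that even enormous increases in execution time add only a handful of bits:

```python
import math

def configurational_information(s_min_bits, t_min_steps):
    """Ic = Smin + log2(Tmin): Smin in bits, Tmin in execution steps."""
    return s_min_bits + math.log2(t_min_steps)

s_min = 500  # assumed minimal algorithm length, in bits
for t_min in (10**3, 10**6, 10**9, 10**12):
    ic = configurational_information(s_min, t_min)
    print(f"Tmin = {t_min:>15,} steps -> Ic ~ {ic:.1f} bits")
```

A billion-fold increase in run time adds only about 30 bits, which is the slow growth referred to above; the force of the expanding-parallel-processing point is that it is not bound by this logarithmic bottleneck.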

Relevant Links:

Quantum Non-Linearity: Darwin Bicentenary Part 27: The Mystery of Life’s Origin (Chapter 7) (quantumnonlinearity.blogspot.com)

Quantum Non-Linearity: Darwin Bicententary Part 28: The Mystery of Life’s Origin, Chapter 8 (quantumnonlinearity.blogspot.com)

Quantum Non-Linearity: Darwin Bicententary Part 30: The Mystery of Life’s Origin, Chapter 9 (quantumnonlinearity.blogspot.com)

Wednesday, May 17, 2023

AI "Godfather" retires & voices fears about AI dangers

This post is still undergoing correction and enhancement. 

Do AI systems have a sense of self?

What worries me about these robots is not that they are robots, but 
that they look too close to humanity, which probably means they are made in the
image of man and therefore, as they seek their goals, they share 
human moral & epistemic limitations. 


This BBC article is about the retirement of Artificial Intelligence guru Geoffrey Hinton. Here we read:

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field. Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

Before I proceed further with this article, first a reminder about my previous blog post where I mentioned the "take home" lessons from my AI "Thinknet" project. Any intelligence, human or otherwise, has to grapple with a reality that, I propose, can be very generally represented in an abstracted way as a rich complex of properties distributed over a set of items. Diagrammatically:

Intelligence perceives any non-random relations between properties and then draws conclusions from these relations*. These relations can be learnt either by a) recording the statistical linkages between properties and selecting out and remembering any significant associations from these statistics, or b) by reading text files that contain prefabricated associations stated as Bayesian probabilities. Because learning associations from statistics is a very longwinded affair I opted for text-file learning. Nobody is going to learn, say, quantum theory from direct experience without a very long lead time.
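For concreteness, here is a toy sketch of those two learning modes (my own illustration, not the actual Thinknet C++ code; the weather-like properties and the "file" contents are invented for the example):

```python
from collections import Counter
from itertools import combinations

# (a) Statistical learning: each observed item is a set of properties, and
# associations are distilled from how often properties co-occur.
observations = [
    {"wet", "cloudy"}, {"wet", "cloudy"}, {"dry", "sunny"},
    {"wet", "cloudy"}, {"dry", "sunny"}, {"wet", "windy"},
]
pair_counts, prop_counts = Counter(), Counter()
for item in observations:
    prop_counts.update(item)
    pair_counts.update(frozenset(pair) for pair in combinations(sorted(item), 2))

def learned_link(a, b):
    """Estimate P(b | a) from the accumulated co-occurrence statistics."""
    return pair_counts[frozenset((a, b))] / prop_counts[a]

print("P(cloudy | wet) learned from statistics :", learned_link("wet", "cloudy"))

# (b) Text-file learning: the same kind of link handed over ready-made as a
# prefabricated Bayesian statement, skipping the long statistical apprenticeship.
prefabricated_links = {("wet", "cloudy"): 0.75, ("dry", "sunny"): 1.0}
print("P(cloudy | wet) read from a text file   :", prefabricated_links[("wet", "cloudy")])
```

The weakness flagged in the next paragraphs shows up in mode (b): the reader simply inherits whatever biases and sampling gaps went into compiling those prefabricated numbers.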

The world is full of many different "properties" and myriad instantiations of those properties coming into association. As any intelligence has, naturally, a limited ability to freely sample this complex structure, intelligence tends to be error-prone as a result of a combination of insufficient sampling and accessibility issues**; in short, intelligence faces epistemic difficulties. However, if experience of the associations between properties is accumulated and tested over a long period of time and compiled as reviewable Bayesian text files, this can help mitigate the problem of error, but not obviate it completely. In a complex world these text files are likely to remain partial, especially if affected by group dynamics where social pressures exist and groupthink gets locked in. 

The upshot is that an intelligence can only be as intelligent as the input from its environment allows. In the extreme case of a complete information black-out it is clear that an intelligence would learn nothing and, like a computer with no software, could think nothing; the accessibility, sample representativeness and organization of the environment in which an intelligence is immersed, and which it attempts to interrogate, set an upper limit on just how intelligent an intelligence can be. 

The weak link in the emergence of text-dependent intelligence is those Bayesian probabilities - millions of them: they may be unrepresentative, and there may be too many of them, or too few. They will have a tendency to be proprietary, depending on the social circles in which they are compiled. They may be biased by various human adaptive needs; like, for example, the need to appear dogmatic and confident if one is a leader, or the need to attach oneself to a group and express doctrinal loyalty to the group in return for social support, validation & status. Given that so much information comes via texts rather than a first-principles contact with reality, one of the weak links is that these text files are compiled by interlocutors whose knowledge may be compromised by partial & biased sampling, group dynamics and the priority of adaptive needs. This may well be passed on to any AI that reads them.

In short, AI, in the light of these considerations, may well be as hamstrung as humanity in forming sound conclusions from text files; the alternative is to go back to the Stone Age and start again by accumulating knowledge experimentally; but even then reality may not present a representative cross-section of itself. 

 ***

The Venn diagram and the gambling selection schemes theorem are key to understanding this situation. The crucial lesson is that everybody should exercise epistemic humility because the universe only reveals so much about itself; it need not reveal anything, but providentially it reveals much. Let's thank The Creator for that.

Finally let me repeat my two main qualifications about current AI: 

a) In my opinion Digital AI is only a simulation of biological intelligence: it is not a conscious intelligence. For consciousness to result one would have to use atoms and molecules in the way biology uses them. (See the last chapter in this book for my tentative proposal for the physical conditions of consciousness.)

b) Nevertheless, my working hypothesis is that biological intelligence is not so exotic in nature that it can't be simulated with sufficiently sophisticated algorithms. For example, I think it unlikely that biological intelligence is a non-computable phenomenon - see here for my reasons why. The de-facto North American Intelligent Design community have painted themselves into a corner in this respect in that they have become too committed to intelligence being something exotic. This is a result of an implicit philosophical dualism which makes a sharp demarcation between intelligence and other phenomena of the created world. This implicit dualist philosophy has been built into their "Explanatory Filter". They appear unaware of their dualism.

So, with these thoughts in mind, let me now go on to add comment to the BBC article: 

***


BBC: In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning. 

MY COMMENT: That "similarity", as I've said before, is in formal structure rather than being qualitatively similar; that is, it is a simulation of human thinking. A simulation will have isomorphisms with what's being simulated but will differ from the object being simulated in that its fundamental qualitative building blocks are of a different quality. For example, an architect's plan will have a point-by-point correspondence with a physical building, but the stuff used in the plan is of a very different quality to the actual building. It is this difference in quality which makes a simulation a simulation rather than the thing-in-itself. To compare an architect's plan with a dynamic computer simulation might seem a little strained, but that's because a paper plan is a static representation of three spatial dimensions and lacks the fourth dimension of time. Current digital AI systems are dynamic simulations in that they add the time dimension: but they do not make use of the qualities of fundamental physics which, if used rightly, I propose, result in conscious cognition.

BBC: The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds. "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

 MY COMMENT: The level of information held in a library or on the Web, at least potentially, exceeds the information that the human brain holds, so the first statement above is not at all startling. But if one characterizes the information a human mind can access via a library or their iPhone or their computer as off-line information accessible via these clever technological extensions of the human mind this puts human information levels back into the running again. After all, even in my own mind there is much I know which takes a little effort and time to get back into the conscious spotlight and almost classifies as a form of off-line information. 

Yes, I'd accept that AI reasoning has room for (possible) enhancement and may eventually do better than the human mind, just as adding machines can better humans at arithmetic. But why do we need to worry? The article suggests why......

BBC: In the New York Times article, Dr Hinton referred to "bad actors" who would try to use AI for "bad things". When asked by the BBC to elaborate on this, he replied: "This is just a kind of worst-case scenario, kind of a nightmare scenario. "You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals." The scientist warned that this eventually might "create sub-goals like 'I need to get more power'".

MY COMMENT: Well yes, I think I agree! As soon as man discovered that a stone or a stick could be used as a weapon, bad actors were always going to be a problem. Today of course we've got the even more pressing problem of bad actors with AR15s running amok and, even worse, despots in charge of nuclear weapons. But is Hinton right about the creation of robots with power-seeking sub-goals? Maybe, but if that can happen then somebody might create robots with the sub-goal of destroying robots that seek power! Like other technology, AI appears to be just another case of an extension of the human power to enhance and help enforce its own high-level goal seeking. It is conceivable, however, that either by accident or design somebody creates an AI that has a high-level goal of seeking its own self-esteem & self-interest above all other goals: this kind of AI would have effectively become a complex adaptive system in its own right; that is, a system seeking to consolidate, perpetuate & enhance its identity. But by then humans would have at their disposal their own AI extension of the human power to act teleologically. The result would be like any other technological arms race: a dynamic stalemate, the likely result if both sides have similar technology. So, it is not at all clear that rampaging robots, with or without a bad-acting human controller, would inevitably dominate humanity. However, I agree, the potential dangers should be acknowledged. Those dangers will be of at least three types: a) AI drones out of the control of their human creators (although I feel this to be an unlikely scenario); b) probably more relevant, what new technology has so often done in the past, viz. shifting the production goal posts and resulting in social disruption and displacement; c) abuse of the technology by "bad actors".

But much of Hinton's thinking about the dangers of AI appears to be predicated on implicit assumptions about the possibility of AI having a clear sense of its identity; that is, a self-aware identity. A library of information may have a clear identity in that its information media are confined within the walls of a building, but the question of self-aware identity only comes to the fore when the library holds information about itself. Hinton's fears rest on the implicit assumption that an AI system can have a self-aware sense of individual identity, that is, a sense of self and the motivation which seeks to perpetuate and enhance that self. Without that sense of identity AI remains just a very dynamic information generator; in fact like a public library in that it is open to everyone, but with the addition of very helpful and intelligent assistants attached to that library. But if an AI system has a notion of self, and is therefore capable of forming the idea that its knowledge somehow pertains to that self, perhaps even believing it has property rights over that knowledge, we are then in a different ball game. This sense of self and ownership is in fact a very human trait, a trait which potentially could be passed on by human programming (or accidental mutation & subsequent self-perpetuation?*). The "self" becomes the source of much aggravation when selves assert themselves over other selves. Once again, we have a problematical issue tracing back to, and revolving round, the very human tendency to over-assert the self at the cost of other selves as it seeks self-esteem, ambition, status & domination. In a social context the self has the potential to generate conflict. In human society a selfish identity-based teleology rules OK - if it can. As the saying goes, "Sin" is the word with the "I" in the middle. But the Christian answer is not to extinguish the self but to bring it under control, to deny self when other selves can be adversely affected by one's self-assertion. (Philippians 2:1-11). 

BBC: He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have. "We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

MY COMMENT: This data sharing idea is valid. In fact, that is exactly what humans have themselves done in spreading technological know-how via word of mouth and information media. Clearly this shared information will be so much more than any one person can know, but we don't lose sleep over that because it is in the public domain and in the public interest; it is, as it were, off-line information available to all should access be required. In this sense there are huge continents of information available on the internet. Here the notion that that information is property belonging to someone or something is a strained idea. Therefore, in what sense wouldn't the information gained by 10,000 webbots also be my knowledge? If we are dealing with a public domain system this is just what technology has always been: viz. a way of extending human powers. Energetically speaking a car is so much more powerful than a human, but the human is in the driving seat, and it is the driver, and not the car, who has a strong sense of individual identity and ownership over the vehicle. Likewise, I have an extensive library of books containing much information unknown to me, although it is in principle available to me should I want to interrogate this library using its indices. It would be even better if all this information were on computer and I could use clever algorithms to search it, and better still if I could use a chatbot; this would extend my cognitive powers even further. But such clever mechanical informational aids don't necessarily mean they also simulate a strong sense of self; all we can say at this stage is that they are testament to the ability of human beings to extend their powers technologically, whether those powers pertain to mental power or muscular power. 

However, I would accept that it may well be possible to simulate computationally a strong sense of self. And again, Hinton's diffidence &/or fear that digital systems can know so much more than any one person only has serious implications if that knowledge is attached to an intelligence (human or otherwise) which has a strong sense of personal identity, ownership and a strong motivation toward self-betterment over and against other selves. Since information is power, the hoarding and privatization of such information would then be in its (selfish) interests. Only in this context of self-identity does the analogy of a large public library staffed by helpful slave assistants break down. Only in this context can I understand any assumption that the knowledge belongs to one AI self-aware identity. This very human concept of personal ambition & individual identity appears to be behind Hinton's fears, although he doesn't explicitly articulate it. With AI it is very natural to assume we are dealing with a self-aware self, although that need not be the case: it is something which has to be programmed in. 

If there is a powerful sense of individual identity which wishes to hoard knowledge, own it and privatize it, that sounds like a very human trait, and if this sense of individualism and property were delegated to machinery, it is then that fears about AI may be realized. But until that happens AI systems are just an extension of human powers and identity. 

Let's also recall where chatbot information is coming from: it's largely coming from the texts of human culture. Those texts contain errors, and naturally AI systems will inherit those errors. An AI system can only be as intelligent as its information environment allows. Moreover, as we live in a mathematically chaotic reality it is unlikely that AI will achieve omniscience in terms of its ability to predict and plan; it is likely, then, that AI, no more than humanity, will be able to transcend the role of being a "make-it-up-as-you-go-along" complex adaptive system. 

BBC: Matt Clifford, the chairman of the UK's Advanced Research and Invention Agency, speaking in a personal capacity, told the BBC that Dr Hinton's announcement "underlines the rate at which AI capabilities are accelerating". "There's an enormous upside from this technology, but it's essential that the world invests heavily and urgently in AI safety and control," he said.

Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

'We need to take a step back' In March, an open letter - co-signed by dozens of people in the AI field, including the tech billionaire Elon Musk - called for a pause on all developments more advanced than the current version of AI chatbot ChatGPT so robust safety measures could be designed and implemented.

Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter.

Mr Bengio wrote that it was because of the "unexpected acceleration" in AI systems that "we need to take a step back".

But Dr Hinton told the BBC that "in the shorter term" he thought AI would deliver many more benefits than risks, "so I don't think we should stop developing this stuff," he added.

He also said that international competition would mean that a pause would be difficult. "Even if everybody in the US stopped developing it, China would just get a big lead," he said.

Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed "with a lot of thought into how to stop it going rogue".

MY COMMENT: I think I would largely agree with the foregoing. The dangers of AI are twofold:

1. AI, like all other human technology, is an extension of human powers and it is therefore capable of extending the powers of both good and bad actors: the latter is a social problem destined to be always with us.

2. Those human beings who are effectively creating AI in their own image may create AI systems with a sense of self and the goal of enhancing their persona, where self-identity takes precedence over all other goals. 

My guess is that the danger of AI going rogue and setting up business for its own ends is a lot less likely than AI being used to extend the powers of bad human actors. Nevertheless, I agree with Hinton that we should continue to develop AI, but be mindful of the potential pitfalls. Basically, the moral is this: read Hinton's "with a lot of thought into how to stop it going rogue" as "with a lot of thought into how to stop it becoming too dangerously human and at the disposal of bad actors". The danger is endemic to humanity itself and the irony is that the potential dangers exist because humans create AI in their own image and/or AI becomes an extension of the flawed human will. Thus Hinton's fears are grounded in human nature, a nature that all too readily asserts itself over and above other selves, a flawed nature that may well be passed on to AI systems built in the image of humanity with the same old goals of adapting and preserving an individual identity. Christianity calls those flaws in human nature "Sin", the word with the "I" in the middle. We all have a sense of individuality and a conscious sense of self: That individuality should not be extinguished, but when called for self should be denied in favour of other selves in a life of service (Phil 2:1-11).


Footnote:

* Venn diagrams don't have the facility to form a set of sets. However, this can be achieved using another "higher level" Venn diagram; we thus have Venn diagrams that are about other Venn diagrams. See here:

http://quantumnonlinearity.blogspot.com/2016/05/the-thinknet-project-footnote-on-self_11.html

** Epistemic difficulties surrounding accessibility and signal opacity loom large in historical research. "Epistemic distance" is a big issue in human studies. 


Saturday, April 08, 2023

The AI Question and ChatGPT: "Truth-Mills"?

 


Everyone seems to be complaining about the Artificial Intelligence application "ChatGPT": from passionate left-wing atheist PZ Myers, through moderate evangelical atheist Larry Moran, to cultish fundamentalist Ken Ham, it's nothing but complaints!

PZ Myers refers to ChatGPT as a Bullsh*t fountain. Also, in a post titled "How AI will destroy us", he blames capitalism and publishes a YouTuber who calls AI "B.S.". Biochemist Larry Moran gives ChatGPT a fail mark on the basis that it is "lying" about Junk DNA (that's also a complaint of Myers, although "lying" is rather too anthropomorphic in my opinion). The Christian fundamentalists go spare and lose it completely: Kentucky theme park supremo Ken Ham, in a post titled "AI - It's pushing an Anti-God Agenda" (March 1st), complains that ChatGPT isn't neutral but is clearly anti-God - what he means by that is that its output contradicts Ken's views! We find even greater extremism in a post by PZ Myers where he reports Catholic Michael Knowles claiming that AI may be demonic! Ken Ham is actually much nearer the mark than Knowles when Ken tells us that AI isn't neutral: the irony is that although we are inclined to think of AI as alien, inhuman, impartial and perhaps of superhuman power, it is in fact inextricably bound up with epistemic programming that has its roots in human nature and the nature of reality itself. It is therefore limited by the same fundamental epistemic compromises that necessarily plague human thinking**. Therefore, like ourselves, AI will of necessity hold opinions rather than detached cold certainties. Let me expand this theme a bit further. 

***

From 1987 onwards I tried to develop a software simulation (eventually written in C++) of some of the general features of intelligence. I based this endeavor on Edward De Bono's book "The Mechanism of Mind". I tell this story in my "Thinknet" project and although it was clear that it was the kind of project whose potential for further development was endless, I felt that I had taken its development far enough to understand the basics of how intelligence might form an internal model of its surroundings. The basic idea was simple: it was based on the generalised Venn diagram. Viz:


The whole project was predicated on the assumption that this kind of picture can be used as a general description of the real world. In this picture a complex profusion of categories is formed by properties distributed over a set of items*. If these properties are distributed randomly then there are no relations between them and it is impossible to use any one property as the predictor of other properties. But our world isn't random; rather it is highly organized, and this organization means that there are relationships between properties which can be used predictively. As I show in my Thinknet project the upshot is that the mechanism of mind becomes a network of connections representing these nonrandom relations.  The Thinknet project provides the details of how a model of thinking can be based on a generalised Venn diagram.
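
Here is a minimal sketch of that idea in code. To be clear, this is not the original C++ Thinknet program; it is an illustrative toy (the items, properties and function names are all my own assumptions) showing how the non-random co-occurrence of properties across items can be turned into a weighted network of connections which can then be used predictively:

```python
# Illustrative toy only, NOT the Thinknet code: items carry properties, and
# because the properties are distributed non-randomly, one property can be
# used as a (statistical) predictor of others.

from collections import defaultdict
from itertools import combinations

# A hypothetical "generalised Venn picture": each item owns a set of properties.
items = {
    "sparrow": {"has_wings", "flies", "lays_eggs"},
    "ostrich": {"has_wings", "lays_eggs"},
    "bat":     {"has_wings", "flies", "suckles_young"},
    "crab":    {"lays_eggs"},
}

prop_counts = defaultdict(int)   # how many items carry each property
pair_counts = defaultdict(int)   # how many items carry each ordered pair

for props in items.values():
    for p in props:
        prop_counts[p] += 1
    for a, b in combinations(sorted(props), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def predict(given_prop):
    """Rank the other properties by conditional frequency given `given_prop`.
    These weighted links form a miniature 'network of connections'."""
    n = prop_counts[given_prop]
    scores = {b: c / n for (a, b), c in pair_counts.items() if a == given_prop}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(predict("has_wings"))
# flies and lays_eggs score ~0.67; suckles_young ~0.33
```

In a purely random world every such score would simply hover around each property's base rate and the links would have no predictive value; it is only because the world is organized that a network like this is worth building.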

One thing is fairly obvious: if we have many items and many properties a very complex Venn picture may emerge, and the epistemic challenge then arises from the attempt to render this picture as a mental network of connections.  Epistemically speaking both humans and AI systems suffer from the same limitation: in trying to form a network of connections they can only do so from a limited number of experiential samples.  This would be OK if the world's organization were relatively simple, but the trouble is that although the world is highly organised it is not simple; it is both organized and very, very complex. Complexity is a halfway house between the simplicity of high order and the hyper-complexity of randomness. To casual observers, whether human or AI, this complexity can at first sight look like randomness and therefore presents great epistemic challenges to any attempt to render the world as an accurate connectionist model given the limits on sampling capacity. On top of that let's bear in mind that many of the connections we make don't come from direct contact with reality itself but are mediated by social texts. In fact in the case of my Thinknet model all its information came from compiled text files where the links were already suggested in a myriad of Bayesian statements. This "social" approach to epistemology is necessary because solitary learning from "coalface" experience takes far too long; that would be like starting from the Paleolithic. 
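
To make the sampling point concrete, here is a small illustrative simulation (again a sketch with assumed numbers, not anything taken from the Thinknet project): a genuine, organized link between two properties exists in the "world", but an observer who sees only a handful of items gets a noisy estimate of it that may be hard to distinguish from randomness.

```python
# Illustrative sketch of the sampling problem: the "world" below is genuinely
# organised (property B tends to follow property A), but small samples give
# the observer a noisy, possibly misleading, picture of that organisation.

import random
random.seed(0)

def world_item():
    """One observed item: A occurs half the time; B follows A 80% of the
    time but occurs only 30% of the time otherwise (a non-random world)."""
    a = random.random() < 0.5
    b = random.random() < (0.8 if a else 0.3)
    return a, b

def estimated_link(n_samples):
    """Estimate P(B | A) from a limited number of observed items."""
    b_given_a = [b for a, b in (world_item() for _ in range(n_samples)) if a]
    return sum(b_given_a) / len(b_given_a) if b_given_a else float("nan")

for n in (10, 100, 10_000):
    print(n, round(estimated_link(n), 2))
# Small samples give noisy estimates; only large samples settle near the true 0.8.
```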

Like Thinknet we learn far more from social texts than we do from hands-on experience.  Those social "text files" are extremely voluminous and take a long time to traverse. There is no quick fix that allows this textual experience to be by-passed. This immediately takes us into the realm of culture, group dynamics and even politics, where biased sampling is the natural state of human (& AI) affairs. The complex mental networks derived from culture mean that intelligence, both human and AI, is only as good as the cultural data and samples it receives. So, in short, AI, like ourselves, is going to be highly opinionated, unless AI has some kind of epistemic humility built into its programming. AI isn't going to usher in a new age of unopinionated, error-free knowledge objectively derived from mechanical "Truth-Mills". The age-old fundamental epistemic problems will afflict AI just as they afflict human beings: PZ Myers might call ChatGPT a Bullsh*t fountain, but then that's more or less also his opinion of the Ken Hams, Michael Knowleses and Donald Trumps of this world. On that matter he is undoubtedly right! As with humanity (e.g. Ken Ham) then so with ChatGPT. The bad news for PZ Myers is that Bullsh*t production has now been automated!


Truth Mills: Is AI going to automate the production of theoretical fabric?


ChatGPT for dogs

Footnotes:

* Venn diagrams don't have the facility to form a set of sets. However, this can be achieved using another "higher level" Venn diagram; we thus have Venn diagrams that are about other Venn diagrams. See here:

Quantum Non-Linearity: The Thinknet Project. Footnote: On Self Description (quantumnonlinearity.blogspot.com)

** Epistemic difficulties surrounding accessibility and signal opacity loom large in historical research. "Epistemic distance" is a big issue in human studies. 

Friday, March 31, 2023

Abracademia


The above cartoon, very appropriately named Abracademia, appeared on PZ Myers' blog where he comments:

It makes a good point, that magic isn’t an explanation for much of anything — you need some chain of causality and evidence, with some mechanism at each step. You don’t just get to say “it’s magic” or “it’s a miracle.” 

Bonus, the comic pokes fun at that absurd ad hoc magic system in the Harry Potter books that is nothing but lazy plot gimmicks.

I know that PZ Myers has got a downer on JK Rowling and that explains some of his aversion to H. Potter, but I ask myself this: Do I agree with him? Sort of, but I'll have to qualify. 

Firstly, the cartoon starts off with a chair that is actually being levitated by, well, "magic". So in this context, whatever "magic" is within this cartoon world, it is presented as a real phenomenon. Given that this so-called magic is real, our young heroine in the cartoon does have a point: the curious have every right to usefully ask "How's it done?" By smoke and mirrors? By thin wire suspension? By a newly discovered anti-gravity ray? Or by something even more exotic (like psychokinesis) of which we know nothing? I assume that when Myers tells us there's a need for some explanatory chain of causation along with its accompanying evidence, he's asking for a more closely linked cause-and-effect connection than the utterance of "Floatularis" and the wave of a wand; otherwise, there is a big gap there!

In our world cause-and-effect entails the transmission of energy & information from A to B. So where's the energy & information coming from to lift that chair?  But then this question presumes that the energy/signal transfer paradigm is the correct one to use. Perhaps it doesn't work like that at all when we are dealing with so-called "magic"! But assuming that the cause-and-effect paradigm applies here, by what mysterious means does energy get from A to B? Cause-and-effect "explanations" fill in some of the "in-between" details, and often in ways that allow those details to be predicted from the succinct laws of physics. But apart from this clever mathematical trickery I have to confess that's as far as our understanding goes, and just why those physical algorithms work is as good as "magic"! As I would contend, this kind of science is, in the final analysis, mere description, albeit clever description that comes out of asking the kind of questions our heroine above, at risk of her life, is asking. Of course, it may be possible to further improve on the elegance and comprehensiveness of the laws of physics in hand, but in an absolute sense the descriptive role of science's physical "algorithms" means that ultimately it leaves us with a wall of explanatory incompleteness, what is in fact an explanatory silence. It's ironic that as science fills in the gaps with more descriptive details, we zoom in only to find just more finely spaced gaps! 
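
Just to put a number on the "where's the energy coming from?" question, here is a back-of-envelope calculation; the chair's mass and the height of the levitation are, of course, assumed figures for illustration only:

```python
# Back-of-envelope figures (assumed) for levitating the cartoon chair:
# whatever "magic" is, in our cause-and-effect world this much energy has
# to come from somewhere.

m = 7.0     # kg: assumed mass of the chair
g = 9.81    # m/s^2: gravitational field strength at the Earth's surface
h = 1.5     # m: assumed height of the levitation

work = m * g * h            # work done against gravity, in joules
print(f"{work:.0f} J")      # about 103 J
```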

***

So, is it all magic & mystery? No, it's not magic, and the mystery is better described as the miraculous, an idea pregnant with meaning which stimulates curiosity and prompts further questions. Contrast that with a purely secular take on the cosmic perspective which either posits the organization of the universe as a meaningless brute fact or proposes that the apparent selective contingency of cosmic organization is a human perspective effect arising from an infinite sea of randomness in a multiverse. It goes to show that a magical paradigm is not the only way of thinking which stifles curiosity.  Do you hear "multiverse" and just stop asking questions?

We really need to supplement the past tense question "Where did it all come from?" with the present continuous question "Where is it all coming from?"


Relevant Link:

Quantum Non-Linearity: Something comes from Something: Nothing comes from Nothing. Big Deal (quantumnonlinearity.blogspot.com)