Showing posts with label Artificial Intelligence. Show all posts

Tuesday, November 18, 2025

The AI Garbage Bubble?



I was interested to see that conceited blowhard Richard Carrier has commented very dogmatically & abrasively on the subject of AI. See here:

AI Is Garbage and a Bubble (Please Learn This) • Richard Carrier Blogs

Hahaha Richard, I like it! I've only skimmed over his article but my guess is that he's probably got some worthy points there. We have to factor in, however, that he has a tendency to blow hard on stuff he doesn't like. That is very much a Richard Carrier trait. 

It is likely that the biggest problem with AI is that it's over-hyped. At least part of the reason for this, I guess, is chatbot behavior which gives every impression of a talking, walking sentience; humans have a reactive tendency to think that if it talks and walks like a duck then it's a duck. Responsive talking, in particular, is very convincing evidence of the presence of sentience. But that's a bit like a person from a primitive culture, unaware of our hi-tech times, looking into our very perfect mirrors or listening to a perfect recording and then inferring as a first-off conclusion that he's actually seeing or hearing a real human there and then; it's a very natural and understandable knee-jerk reaction to conclude that such artifacts are evidence of the immediate presence of a sentient being. 

But chatbots are only another (albeit very sophisticated) human-computer interface. It is chatbots' very human-like language behavior which is fooling a lot of people, and if the investment bubble bursts we could be in for some trouble. But then are things as bad as Richard says? He's well known for being a blowhard and he may well be blowing just a little too hard (again). 

The quasi-human interface which chatbots provide is impressive and can give the impression we are talking to an entity which is super-intelligent. But then using SQL (Structured Query Language) to interrogate a big computer database can also be very impressive. I regard AI language models as a step (perhaps quite a few steps, in fact) beyond SQL. I personally would want to congratulate the AI research community on providing us with a natural language interface as a way of accessing knowledge and information. Thanks and well done; you deserve an accolade or two. I have a measure of appreciation of how AI works after my involvement in The Thinknet Project. That involvement started in the 1980s (but later morphed into a project in Quantum Mechanics). 
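To make the SQL comparison concrete, here is a minimal sketch (my own illustration; the table and its contents are entirely hypothetical) of interrogating a database with a structured query. The retrieval is impressive, but nobody mistakes it for sentience:

```python
import sqlite3

# A throwaway in-memory database with some hypothetical "knowledge".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (topic TEXT, fact TEXT)")
con.executemany("INSERT INTO facts VALUES (?, ?)", [
    ("mirrors", "a good mirror reflects a convincing image of a person"),
    ("chatbots", "a language model predicts likely continuations of text"),
    ("chatbots", "a chatbot is a human-computer interface"),
])

# SQL: a formal query language for interrogating stored knowledge.
rows = con.execute(
    "SELECT fact FROM facts WHERE topic = ? ORDER BY fact", ("chatbots",)
).fetchall()
for (fact,) in rows:
    print(fact)
```

A chatbot replaces the rigid `SELECT ... WHERE` syntax with natural language, which is a genuine advance in the interface, but, as with SQL, what lies behind it is machinery, not a mind.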

But in creating AI based on the human thinking* model it is a fairly sound inference that it is therefore going to be very, very fallible; after all, humans are very epistemically fallible. In fact, ask Richard himself just what he thinks of the fallible conclusions of many of his fellow humans, on whom he has been known to blow very hard. His criticism of AI is on a par with some of his criticism of his fellow humans: like humans, like AI. However, although Richard's post is probably overstated it is worth a read, especially by some of those rich CEOs who don't understand what they are dealing with. It might sober them up a bit on the subject.


Footnotes

* As I proceeded with the Thinknet model I perceived that an actual "thinking machine" was not very far away. After all, I based my model on Edward De Bono's "Mechanism of Mind".

Tuesday, September 19, 2023

NAID pundit William Dembski on AI


Is AI taking off?

I notice that North American Intelligent Design guru William Dembski has been recently writing about Artificial Intelligence on his blog - see these links:

ChatGPT Is Becoming Increasingly Impressive in Real Time – Bill Dembski

 Inferring the Best Explanation via Artificial Intelligence – Bill Dembski

ChatGPT on Intelligent Design and the Origin of Life – Bill Dembski

One of these articles also appeared on the NAID website "Evolution News"; see here. From a quick perusal of his articles, it seems that Dembski is impressed by ChatGPT and now has a high view of AI possibilities. Like myself he would probably maintain that AI is only a simulation and has no consciousness*, but I'd be interested to know if, like myself, he would be prepared to accept that human intelligence can be simulated algorithmically. If so that would set him apart from NAID guru Eric Hedin, who appears to pooh-pooh any thought that AI can be anything other than a glorified "printer" that churns out and at best rearranges information that has been generated by the "real" intelligence of human beings. 

I think I'll be having a closer look at Dembski's posts.


* To generate consciousness we would have to use matter in the way the biological brain uses matter - see here for my guesses on this subject

Wednesday, May 17, 2023

AI "Godfather" retires & voices fears about AI dangers

This post is still undergoing correction and enhancement. 

Do AI systems have a sense of self?

What worries me about these robots is not that they are robots, but 
they look too close to humanity, which probably means they are made in the
image of man and therefore as they seek their goals they share 
human moral & epistemic limitations. 


This BBC article is about the retirement of Artificial Intelligence guru Geoffrey Hinton. Here we read:

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field. Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

Before I proceed further with this article, first a reminder about my previous blog post where I mentioned the "take home" lessons from my AI "Thinknet" project. Any intelligence, human or otherwise, has to grapple with a reality that, I propose, can be very generally represented in an abstracted way as a rich complex of properties distributed over a set of items. Diagrammatically:

Intelligence perceives any non-random relations between properties and then draws conclusions from these relations*. These relations can be learnt either by a) recording the statistical linkages between properties and selecting out and remembering any significant associations from these statistics, or b) by reading text files that contain prefabricated associations stated as Bayesian probabilities. Because learning associations from statistics is a very longwinded affair I opted for text-file learning. Nobody is going to learn, say, quantum theory from direct experience without a very long lead time.
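Option a) can be sketched with a toy example. The following is my own illustrative code, not Thinknet itself; the item data, the co-occurrence tally and the 0.5 "significance" threshold are all arbitrary assumptions made for the sake of the sketch:

```python
from collections import Counter
from itertools import combinations

# Toy illustration (not Thinknet code): each "item" is a set of
# properties, and we tally how often pairs of properties co-occur.
items = [
    {"feathers", "beak", "flies"},
    {"feathers", "beak", "swims"},
    {"fur", "swims"},
    {"feathers", "beak", "flies"},
]

pair_counts = Counter()
prop_counts = Counter()
for props in items:
    prop_counts.update(props)
    pair_counts.update(frozenset(pair) for pair in combinations(props, 2))

# Keep only "significant" associations: pairs that co-occur in more than
# half the occurrences of the rarer property (an arbitrary threshold).
links = {}
for pair, c in pair_counts.items():
    a, b = sorted(pair)
    strength = c / min(prop_counts[a], prop_counts[b])
    if strength > 0.5:
        links[(a, b)] = strength

print(links)  # "beak" and "feathers" emerge as a strong association
```

Even this toy version shows why statistical learning is longwinded: reliable association statistics demand many items, which is exactly why prefabricated text-file associations are the quicker route.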

The world is full of many different "properties" and myriad instantiations of those properties coming into association. As any intelligence naturally has a limited ability to freely sample this complex structure, intelligence tends to be error prone as a result of a combination of insufficient sampling and accessibility issues**; in short, intelligence faces epistemic difficulties. However, if experience of the associations between properties is accumulated and tested over a long period of time and compiled as reviewable Bayesian text files, this can help mitigate the problem of error, but not obviate it completely. In a complex world these text files are likely to remain partial, especially if affected by group dynamics where social pressures exist and groupthink gets locked in. 

The upshot is that an intelligence can only be as intelligent as the input from its environment allows. In the extreme case of a complete information black-out it is clear that intelligence would learn nothing and, like a computer with no software, could think nothing; the accessibility, sample representativeness and organization of the environment in which an intelligence is immersed, and which it attempts to interrogate, set an upper limit on just how intelligent an intelligence can be. 

The weak link in the emergence of text-dependent intelligence is those Bayesian probabilities - millions of them: they may be unrepresentative, and there may be too many or too few of them. They will have a tendency to be proprietary, depending on the social circles in which they are compiled. They may be biased by various human adaptive needs; for example, the need to appear dogmatic and confident if one is a leader, or the need to attach oneself to a group and express doctrinal loyalty to the group in return for social support, validation & status. Given that so much information comes via texts rather than first-principle contact with reality, these text files are compiled by interlocutors whose knowledge may be compromised by partial & biased sampling, group dynamics and a priority on adaptive needs. That compromise may well be passed on to any AI that reads them.

In short AI, in the light of these considerations, may well be as hamstrung as humanity in the forming of sound conclusions from text files; the alternative is to go back to the Stone Age and start again by accumulating knowledge experimentally; but even then reality may not present a representative cross-section of itself. 

 ***

The Venn diagram and the gambling selection schemes theorem are key to understanding this situation. The crucial lesson is that everybody should exercise epistemic humility because the universe only reveals so much about itself; it need not reveal anything, but providentially it reveals much. Let's thank The Creator for that.

Finally let me repeat my two main qualifications about current AI: 

a) In my opinion Digital AI, is only a simulation of biological intelligence: It is not a conscious intelligence: For consciousness to result one would have to use atoms and molecules in the way biology uses them. (See the last chapter in this book for my tentative proposal for the physical conditions of consciousness)

b) Nevertheless, my working hypothesis is that biological intelligence is not so exotic in nature that it can't be simulated with sufficiently sophisticated algorithms. For example, I think it unlikely that biological intelligence is a non-computable phenomenon - see here for my reasons why. The de-facto North American Intelligent Design community have painted themselves into a corner in this respect in that they have become too committed to intelligence being something exotic. This is a result of an implicit philosophical dualism which makes a sharp demarcation between intelligence and other phenomena of the created world. This implicit dualist philosophy has been built into their "Explanatory Filter". They appear unaware of their dualism.

So, with these thoughts in mind, let me now go on to add comment to the BBC article: 

***


BBC: In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning. 

MY COMMENT: That "similarity", as I've said before, is in formal structure rather than being qualitatively similar; that is, it is a simulation of human thinking. A simulation will have isomorphisms with what's being simulated but will differ from the object being simulated in that its fundamental qualitative building blocks are of a different quality. For example, an architect's plan will have a point-by-point correspondence with a physical building, but the stuff used in the plan is of a very different quality to the actual building. It is this difference in quality which makes a simulation a simulation rather than the thing-in-itself. To compare an architect's plan with a dynamic computer simulation might seem a little strained, but that's because a paper plan represents at most three dimensions and lacks the fourth dimension of time. Current digital AI systems are dynamic simulations in that they add the time dimension: but they do not make use of the qualities of fundamental physics which, if used rightly, I propose, result in conscious cognition.

BBC: The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds. "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

MY COMMENT: The level of information held in a library or on the Web, at least potentially, exceeds the information that the human brain holds, so the first statement above is not at all startling. But if one characterizes the information a human mind can access via a library, an iPhone or a computer as off-line information accessible via these clever technological extensions of the human mind, this puts human information levels back into the running again. After all, even in my own mind there is much I know which takes a little effort and time to get back into the conscious spotlight and almost qualifies as a form of off-line information. 

Yes, I'd accept that AI reasoning has room for (possible) enhancement and may eventually do better than the human mind, just as adding machines can better humans at arithmetic. But why do we need to worry? The article suggests why...

BBC: In the New York Times article, Dr Hinton referred to "bad actors" who would try to use AI for "bad things". When asked by the BBC to elaborate on this, he replied: "This is just a kind of worst-case scenario, kind of a nightmare scenario. "You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals." The scientist warned that this eventually might "create sub-goals like 'I need to get more power'".

MY COMMENT: Well yes, I think I agree! As soon as man discovered that a stone or a stick could be used as a weapon, bad actors were always going to be a problem. Today of course we've got the even more pressing problem of bad actors with AR15s running amok and, even worse, despots in charge of nuclear weapons. But is Hinton right about the creation of robots with power-seeking sub-goals? Maybe, but if that can happen then somebody might create robots with the sub-goal of destroying robots that seek power! Like other technology, AI appears to be just another case of an extension of the human power to enhance and help enforce its own high-level goal seeking. It is conceivable, however, that either by accident or design somebody creates an AI that has a high-level goal of seeking its own self-esteem & self-interests above all other goals: this kind of AI would have effectively become a complex adaptive system in its own right; that is, a system seeking to consolidate, perpetuate & enhance its identity. But by then humans would have at their disposal their own AI extension to the human power to act teleologically. The result would be like any other technological arms race: a dynamic stalemate, a likely result if both sides have similar technology. So, it is not at all clear that rampaging robots, with or without a bad-acting human controller, would inevitably dominate humanity. However, I agree, the potential dangers should be acknowledged. Those dangers will be of at least three types:

a) AI drones out of the control of their human creators (although I feel this to be an unlikely scenario).

b) Probably more relevant, what new technology has so often done in the past: viz, shifting the production goal posts and resulting in social disruption and displacement.

c) Abuse of the technology by "bad actors".

But much of Hinton's thinking about the dangers of AI appears to be predicated on implicit assumptions about the possibility of AI having a clear sense of its identity; that is, a self-aware identity. A library of information may have a clear identity in that its information media are confined within the walls of a building, but the question of self-aware identity only comes to the fore when the library holds information about itself. Hinton's fears rest on the implicit assumption that an AI system can have a self-aware sense of individual identity, that is, a sense of self and the motivation which seeks to perpetuate and enhance that self. Without that sense of identity AI remains just a very dynamic information generator; in fact like a public library in that it is open to everyone, but with the addition of very helpful and intelligent assistants attached to that library. But if an AI system has a notion of self and is therefore capable of forming the idea that its knowledge somehow pertains to that self, perhaps even believing it has property rights over that knowledge, we are then in a different ball game. This sense of self and ownership is in fact a very human trait, a trait which potentially could be passed on by human programming (or accidental mutation & subsequent self-perpetuation?). The "self" becomes the source of much aggravation when selves assert themselves over other selves. Once again, we have a problematical issue tracing back to and revolving round the very human tendency to over-assert the self at the cost of other selves as it seeks self-esteem, ambition, status & domination. In a social context the self has the potential to generate conflict. In human society a selfish identity-based teleology rules OK - if it can. As the saying goes, "Sin" is the word with the "I" in the middle. But the Christian answer is not to extinguish the self but to bring it under control, to deny self when other selves can be adversely affected by one's self-assertion (Philippians 2:1-11). 

BBC: He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have. "We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

MY COMMENT: This data sharing idea is valid. In fact, that is exactly what humans have themselves done in the spreading of technological know-how via word of mouth and information media. Clearly this shared information will be so much more than any one person can know, but we don't lose sleep over that because it is in the public domain and in the public interest; it is, as it were, off-line information available to all should access be required. In this sense there are huge continents of information available on the internet. Here the notion that that information is property belonging to someone or something is a strained idea. Therefore, in what sense wouldn't the information gained by 10,000 webbots also be my knowledge? If we are dealing with a public domain system this is just what technology has always been: viz, a way of extending human powers. Energetically speaking a car is so much more powerful than a human, but the human is in the driving seat, and it is the driver, and not the car, who has a strong sense of individual identity and ownership over the vehicle. Likewise, I have an extensive library of books containing much information unknown to me, although it is in principle available to me should I want to interrogate this library using its indices. It would be even better if all this information was on computer and I could use clever algorithms to search it, and better still if I could use a chatbot; this would extend my cognitive powers even further. But such clever mechanical informational aids don't necessarily simulate a strong sense of self; all we can say at this stage is that they are testament to the ability of human beings to extend their powers technologically, whether those powers pertain to mental power or muscular power. 

However, I would accept that it may well be possible to simulate computationally a strong sense of self. And again, Hinton's diffidence &/or fear that digital systems can know so much more than any one person only has serious implications if that knowledge is attached to an intelligence (human or otherwise) which has a strong sense of personal identity, ownership and a strong motivation toward self-betterment over and against other selves. Since information is power, the hoarding and privatization of such information would then be in its (selfish) interests. Only in this context of self-identity does the analogy of a large public library staffed by helpful slave assistants break down. Only in this context can I understand any assumption that the knowledge belongs to one AI self-aware identity. This very human concept of personal ambition & individual identity appears to be behind Hinton's fears, although he doesn't explicitly articulate it. With AI it is very natural to assume we are dealing with a self-aware self, although that need not be the case: it is something which has to be programmed in. 

If there is a powerful sense of individual identity which wishes to hoard knowledge, own it and privatize it, that sounds like a very human trait, and if this sense of individualism and property were delegated to machinery it is then that fears about AI may be realized. But until that happens AI systems are just an extension of human powers and identity. 

Let's also recall where chatbot information is coming from: it's largely coming from the texts of human culture. Those texts contain errors and naturally AI systems will inherit those errors. An AI system can only be as intelligent as its information environment allows. Moreover, as we live in a mathematically chaotic reality it is unlikely that AI will achieve omniscience in terms of its ability to predict and plan; it is likely then that AI, no more than humanity, will be able to transcend the role of being a "make-it-up-as-you-go-along" complex adaptive system. 

BBC: Matt Clifford, the chairman of the UK's Advanced Research and Invention Agency, speaking in a personal capacity, told the BBC that Dr Hinton's announcement "underlines the rate at which AI capabilities are accelerating". "There's an enormous upside from this technology, but it's essential that the world invests heavily and urgently in AI safety and control," he said.

Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

'We need to take a step back' In March, an open letter - co-signed by dozens of people in the AI field, including the tech billionaire Elon Musk - called for a pause on all developments more advanced than the current version of AI chatbot ChatGPT so robust safety measures could be designed and implemented.

Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter.

Mr Bengio wrote that it was because of the "unexpected acceleration" in AI systems that "we need to take a step back".

But Dr Hinton told the BBC that "in the shorter term" he thought AI would deliver many more benefits than risks, "so I don't think we should stop developing this stuff," he added.

He also said that international competition would mean that a pause would be difficult. "Even if everybody in the US stopped developing it, China would just get a big lead," he said.

Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed "with a lot of thought into how to stop it going rogue".

MY COMMENT: I think I would largely agree with the foregoing. The dangers of AI are twofold:

1. AI, like all other human technology, is an extension of human powers and it is therefore capable of extending the powers of both good and bad actors: the latter is a social problem destined to be always with us.

2. Those human beings who are effectively creating AI in their own image may create AI systems with a sense of self, systems whose goal of enhancing their own persona and self-identity takes precedence over all other goals. 

My guess is that the danger of AI going rogue and setting up business for its own ends is a lot less likely than AI being used to extend the powers of bad human actors. Nevertheless, I agree with Hinton that we should continue to develop AI, but be mindful of the potential pitfalls. Basically, the moral is this: read Hinton's "with a lot of thought into how to stop it going rogue" as "with a lot of thought into how to stop it becoming too dangerously human and at the disposal of bad actors". The danger is endemic to humanity itself and the irony is that the potential dangers exist because humans create AI in their own image and/or AI becomes an extension of the flawed human will. Thus Hinton's fears are grounded in human nature, a nature that all too readily asserts itself over and above other selves, a flawed nature that may well be passed on to AI systems built in the image of humanity with the same old goals of adapting and preserving an individual identity. Christianity calls those flaws in human nature "Sin", the word with the "I" in the middle. We all have a sense of individuality and a conscious sense of self: That individuality should not be extinguished, but when called for self should be denied in favour of other selves in a life of service (Phil 2:1-11).


Footnote:

* Venn diagrams don't have the facility to form a set of sets. However, this can be achieved using another "higher level" Venn diagram; we thus have Venn diagrams that are about other Venn diagrams. See here:

http://quantumnonlinearity.blogspot.com/2016/05/the-thinknet-project-footnote-on-self_11.html

** Epistemic difficulties surrounding accessibility and signal opacity loom large in historical research. "Epistemic distance" is a big issue in human studies. 

Another BBC link:

Saturday, April 08, 2023

The AI Question and ChatGPT: "Truth-Mills"?

 


Everyone seems to be complaining about the Artificial Intelligence application "ChatGPT" : From passionate leftwing atheist PZ Myers through moderate evangelical atheist Larry Moran to cultish fundamentalist Ken Ham it's nothing but complaints!

PZ Myers refers to ChatGPT as a Bullsh*t fountain. Also, in a post titled "How AI will destroy us", he blames capitalism and publishes a YouTuber who calls AI "B.S.". Biochemist Larry Moran gives ChatGPT a fail mark on the basis that it is "lying" about Junk DNA (that's also a complaint of Myers, although "lying" is rather too anthropomorphic in my opinion). The Christian fundamentalists go spare and lose it completely: Kentucky theme park supremo Ken Ham, in a post titled "AI - It's pushing an Anti-God Agenda" (March 1st), complains that ChatGPT isn't neutral but is clearly anti-God - what he means by that is that its output contradicts Ken's views! We find even greater extremism in a post by PZ Myers where he reports Catholic Michael Knowles claiming that AI may be demonic! Ken Ham is actually much nearer the mark than Knowles when Ken tells us that AI isn't neutral: the irony is that although we are inclined to think of AI as alien, inhuman, impartial and perhaps of superhuman power, it is in fact inextricably bound up with epistemic programming that has its roots in human nature and the nature of reality itself. It is therefore limited by the same fundamental epistemic compromises that necessarily plague human thinking**. Therefore, like ourselves, AI will of necessity hold opinions rather than detached cold certainties. Let me expand this theme a bit further. 

***

From 1987 onwards I tried to develop a software simulation (eventually written in C++) of some of the general features of intelligence. I based this endeavor on Edward De Bono's book "The Mechanism of Mind". I tell this story in my "Thinknet" project and although it was clear that it was the kind of project whose potential for further development was endless, I felt that I had taken its development far enough to understand the basics of how intelligence might form an internal model of its surroundings. The basic idea was simple: it was based on the generalised Venn diagram. Viz:


The whole project was predicated on the assumption that this kind of picture can be used as a general description of the real world. In this picture a complex profusion of categories is formed by properties distributed over a set of items*. If these properties are distributed randomly then there are no relations between them and it is impossible to use any one property as a predictor of other properties. But our world isn't random; rather it is highly organized, and this organization means that there are relationships between properties which can be used predictively. As I show in my Thinknet project, the upshot is that the mechanism of mind becomes a network of connections representing these non-random relations. The Thinknet project provides the details of how a model of thinking can be based on a generalised Venn diagram.
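A minimal sketch of the predictive idea (my own toy code, not the actual Thinknet implementation; the item/property data are invented for illustration): because the properties are non-randomly distributed, observing one property lets us rank the likelihood of the others.

```python
from collections import defaultdict

# Hypothetical item/property data: each item carries a set of properties.
world = [
    {"metal", "conducts", "shiny"},
    {"metal", "conducts", "dull"},
    {"wood", "insulates", "dull"},
    {"metal", "conducts", "shiny"},
]

# Build the "network of connections": count how often each property
# accompanies each other property across the items.
cooc = defaultdict(lambda: defaultdict(int))
count = defaultdict(int)
for props in world:
    for p in props:
        count[p] += 1
        for q in props:
            if q != p:
                cooc[p][q] += 1

def predict(prop):
    """Rank the other properties by conditional frequency given prop."""
    return sorted(((q, cooc[prop][q] / count[prop]) for q in cooc[prop]),
                  key=lambda t: -t[1])

print(predict("metal"))  # "conducts" tops the ranking with frequency 1.0
```

In a random world every conditional frequency would hover around the base rate and the ranking would be useless; it is the organization of the world that makes the connection network informative.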

One thing is fairly obvious: if we have many items and many properties a very complex Venn picture may emerge, and the epistemic challenge then arises from the attempt to render this picture as a mental network of connections. Epistemically speaking, both humans and AI systems suffer from the same limitations: in trying to form a network of connections they can only do so from a limited number of experiential samples. This would be OK if the world was of a relatively simple organization, but the trouble is that yes, it is highly organised, but it is not simple; it is in fact both organized and yet very, very complex. Complexity is a halfway house between the simplicity of high order and the hyper-complexity of randomness. To casual observers, whether human or AI, this complexity can at first sight look like randomness and therefore present great epistemic challenges in trying to render this world as an accurate connectionist model, given the limits on sampling capacity. On top of that, let's bear in mind that many of the connections we make don't come from direct contact with reality itself but are mediated by social texts. In fact, in the case of my Thinknet model all its information came from compiled text files where the links were already suggested in a myriad of Bayesian statements. This "social" approach to epistemology is necessary because solitary learning from "coalface" experience takes far too long; that would be like starting from the Paleolithic. 
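The sampling limitation can be illustrated numerically. This is my own toy sketch, not anything from Thinknet; the 70% association strength and the sample sizes are arbitrary assumptions:

```python
import random

random.seed(0)

# A hypothetical world in which property B genuinely accompanies
# property A 70% of the time.
def draw_item():
    has_a = random.random() < 0.5
    has_b = has_a and random.random() < 0.7
    return has_a, has_b

def estimate_p_b_given_a(n_samples):
    """Estimate P(B | A) from a limited number of observed items."""
    a_count = b_with_a = 0
    for _ in range(n_samples):
        a, b = draw_item()
        if a:
            a_count += 1
            b_with_a += b
    return b_with_a / a_count if a_count else 0.0

# A tiny sample can badly misjudge the association; a large one converges
# on the true value - but large samples are exactly what a casual observer
# of a complex world does not have.
small = estimate_p_b_given_a(10)
large = estimate_p_b_given_a(100_000)
print(round(small, 2), round(large, 2))
```

The large-sample estimate settles near 0.7, while the small-sample estimate can land almost anywhere; compiled text files are, in effect, a way of inheriting somebody else's large sample, for better or worse.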

Like Thinknet, we learn far more from social texts than we do from hands-on experience. Those social "text files" are extremely voluminous and take a long time to traverse. There is no quick fix that allows this textual experience to be bypassed. This immediately takes us into the realm of culture, group dynamics and even politics, where biased sampling is the natural state of human (& AI) affairs. The complex mental networks derived from culture mean that intelligence, both human and AI, is only as good as the cultural data and samples they receive. So, in short, AI, like ourselves, is going to be highly opinionated, unless AI has got some kind of epistemic humility built into its programming. AI isn't going to usher in a new age of unopinionated knowledge, unadulterated by error and objectively derived from mechanical "Truth-Mills". The age-old fundamental epistemic problems will afflict AI just as they afflict human beings: PZ Myers might call ChatGPT a Bullsh*t fountain, but then that's more or less also his opinion of the Ken Hams, Michael Knowleses and Donald Trumps of this world. On that matter he is undoubtedly right! As with humanity (e.g. Ken Ham) then so with ChatGPT. The bad news for PZ Myers is that Bullsh*t production has now been automated!


Truth Mills: Is AI going to automate the production of theoretical fabric?


ChatGPT for dogs

Footnotes:

* Venn diagrams don't have the facility to form a set of sets. However, this can be achieved using another "higher level" Venn diagram; we thus have Venn diagrams that are about other Venn diagrams. See here:

Quantum Non-Linearity: The Thinknet Project. Footnote: On Self Description (quantumnonlinearity.blogspot.com)

** Epistemic difficulties surrounding accessibility and signal opacity loom large in historical research. "Epistemic distance" is a big issue in human studies. 

Friday, September 03, 2021

Evolution and Islands of functionality



I've said it before and I'll say it again: William Dembski, the North American "Intelligent Design" guru, is a nice bloke and in many ways an admirable Christian; moreover, I think one of his primary publicized conclusions is entirely correct; that is, a universe such as ours, especially given the presence of life, demands a huge upfront information input. Unless we are going to invoke multiverse ideas this is a truism whether or not life is a product of the mechanisms of evolution as conventionally conceived (but see here for qualification). Dembski is also a reasonable Christian who disowns the fundamentalism abroad among many US Christians. But in spite of all this he has been rejected and even abused by some evolutionists of the academic establishment, especially by evangelical atheists. This is at least in part because some in the IDist community have assumed his work is a sure-fire refutation of standard evolutionary mechanisms. But Dembski's main conclusion isn't such a refutation. In fact Dembski has given a back-handed acknowledgement of this fact. 

As I described in my last blog post there are big stakes here as a consequence of the US right-wing IDists and the atheists in the academic establishment polarizing around what they both believe to be a sharp dichotomy between "natural forces" and "intelligent agency". But the neutrality of Dembski's initial conclusions doesn't mean that Dembski is what the IDists contemptuously refer to as a "Darwinist"; rather he very much aligns with the IDist community and argues against standard evolutionary mechanisms as we shall see in this post. 

Given the establishment vs populist right-wing polarisation in the US, it is not surprising that Dembski has been embraced by the right wing and has turned his talents toward supporting some of their contentions. For example, in this blog post of his we find him entertaining (but falling short of outright affirmation of) the theory that Covid-19 was genetically engineered in China. His post will go down well among Trump right-wingers. In fact I'd be interested to know whether or not Dembski is a Trump supporter and believes in a stolen election. 

For myself I have no useful input on the theory that Covid-19 was genetically engineered in a Chinese laboratory and then perhaps accidentally released. It is a plausible theory that may or may not be true as far as my knowledge is concerned. Unfortunately the authoritarian and secretive nature of the Chinese regime doesn't help their case one little bit: it would be typical of a totalitarian government with little or no accountability to host a classic cock-up and cover-up scenario like a laboratory escape. But if Covid-19 is a Chinese contrivance I think it unlikely it was deliberately released; that idea just smacks too much of the cold-hearted Machiavellian fantasies spread about by deluded conspiracy theorists; I find incompetence and cover-up scenarios much more plausible and in line with humanity's often sleazy and idiotic behavior. In any case it cuts both ways: the fact that lab-leak theories serve right-wing tribal interests erodes the credibility of those theories. But I'm less interested in this issue than in Dembski's references to the evolution question.

***

So, as I was saying, Dembski's main work doesn't contradict standard evolution. But even so, Dembski, of course, finds himself on the anti-evolution side of the culture war and naturally enough has tried to advance arguments which attempt to refute evolution. In his Covid-19 post he gives a resume of a frequent argument used by IDists. In his post we find the picture I've published at the head of this post, and Dembski tells us about it:

This first slide illustrates, by analogy, what the Darwinist thinks must be the case, namely, that islands of functionality exist dotted along the way in getting from the left most island to the farthest off island. With all these intermediate islands, it is easy (probable) to jump from one island to the next and thus get to the far-off island by starting with the closest one (the far-off island representing the end product of evolution that we’re trying to explain).

Yes I agree: each organic variation that walks the Earth must be functional and able to transmit incremental variations to the next generation, variations that must themselves also be functional. Evolution is a step-by-step gradual process that doesn't conceive of huge organic variations appearing in one generation. E.g. lobe-finned fish didn't become amphibians in just one generation; that would require millions of years of step-by-step change, where each step is capable of survival and replication. 

But Dembski goes on to give us this second picture to ponder: 



According to Dembski this picture illustrates the possible problem with standard evolutionary mechanisms that depend on the small jumps of incremental change. Of this matter he says this:

But how do we know that those intermediate islands exist? The second slide illustrates this possibility, and insofar as it describes what is happening with biological change, it renders Darwinian evolution far less plausible. It needs to be noted here that whether these transitional islands (i.e., intermediate functional biological forms) exist is a matter for fact. The dispute between design theorists and Darwinists is over the evidence for these intermediate islands/forms. For the Darwinists, these intermediates must exist because Darwinism requires a gradual form of evolution. For the design theorists it’s not that these intermediates can’t exist but that they might not exist and if they don’t, that argues for intelligent design.

Yes, again I agree: for the Darwinists, these intermediates must exist because Darwinism requires a gradual form of evolution. The battle between IDists of Dembski's variety and the establishment evolutionists revolves round the attempts, on the one hand, of IDists to show that there is no evidence for this "island-hopping" scenario and, on the other hand, of evolutionists to show that there is evidence of the existence of closely set islands of functionality. The IDists, of course, are quite sure that islands of functionality are not closely set enough to facilitate evolution and they then invoke their so-called "explanatory filter" and out pops intelligent agency (I believe this explanatory filter to be flawed if pushed beyond everyday application into the realms of the origins of life - see here for more details). 

***

But there is one thing that Dembski's island metaphor hasn't made sufficiently explicit in my opinion. In the first picture above it could be that the sea is actually very thickly populated with islands of functionality and that the distance between these islands is a small configurational step. And yet this in itself, although a necessary condition for evolution, isn't a sufficient condition. This is because the islands may be so small that a random hop has very little chance of landing on any of these tiny islands of functionality. Actually, if one blows up the magnification of this "many small islands" picture it starts to look a little like Dembski's second picture with its well-separated islands. In fact it's vaguely reminiscent of what one sees of a galaxy in space - from a distance galaxies look to be crowded with closely set stars - but blow up the magnification and one finds the stars to be very small and too far apart for space travel. Likewise, there may well be many islands of functionality, not very distant from one another in terms of steps, but because they occupy such a small area in the "sea of non-functionality" random island hopping is too improbable to be practical.

One way of thinking about this situation is to understand that organisms, because they are composed of many particles, are actually multidimensional entities with huge numbers of dimensions. There may be many functional configurations within a few short steps but nevertheless too few, given the number of dimensions, to be accessible with small random hops; the overwhelming number of short hops will go in the wrong direction.
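The point that most short hops go in the wrong direction in high dimensions can be checked with a little Monte Carlo experiment. This is my own illustrative sketch with made-up numbers: it estimates the fraction of random unit-length hops whose component along one fixed "functional" direction is large, for a low-dimensional and a high-dimensional space.

```python
import math
import random

def fraction_of_good_hops(dim, trials=20000, threshold=0.5):
    """Estimate the fraction of random unit-length hops in `dim` dimensions
    whose component along one fixed 'functional' direction exceeds
    `threshold` -- i.e. hops that go mostly the 'right' way."""
    good = 0
    for _ in range(trials):
        # A Gaussian vector, normalised, gives an isotropic random direction.
        v = [random.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        if v[0] / norm > threshold:
            good += 1
    return good / trials

random.seed(1)
print(fraction_of_good_hops(3))    # roughly 0.25 in 3 dimensions
print(fraction_of_good_hops(100))  # essentially zero in 100 dimensions
```

In 3 dimensions about a quarter of hops make good progress toward the target direction; in 100 dimensions the component along any fixed axis concentrates near zero (its spread shrinks like 1/√dim), so virtually no random hop does.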

I actually much prefer what I call the "spongeam" picture to Dembski's first figure above. I have featured the spongeam structure on this blog several times before. It looks something like this:

In this metaphor we are in 3D rather than 2D, although of course we should be talking about a configuration space of immense dimensionality where the spongeam structure is considerably more tenuous than it looks in the picture above. However, the spongeam metaphor, in my opinion, conveys the complexity of the situation better than the island picture. In the spongeam picture I identify the necessary condition for standard evolutionary mechanisms to be that the class of functional, self-perpetuating organisms forms a connected set in configuration space, resulting in a thin, tenuous, but complex network of fibrils spanning a space of immense dimensionality. In this picture the random-walk steps of evolution are modeled as a form of diffusion guided by the thin connections (or channels) of the spongeam. If the spongeam exists then the mechanism of evolution is a process of diffusion through this network of channels. Also, as I've remarked before, one can express this metaphor for evolution mathematically. Viz: 



I explain this equation more fully in this blog post. Suffice to say here that Y represents some kind of population density at a point in configuration space. The first term on the right-hand side is a diffusion term resulting from the random hops across the space. The second term on the right-hand side represents a breeding or decaying population term, where V is a value which varies across configuration space. It is this value which describes the spongeam structure, a structure which must be sufficiently connected to provide the necessary conditions for standard evolutionary mechanisms. It embeds the upfront "Dembski information" required for those mechanisms to work.
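For readers for whom the image doesn't render, the equation just described can be transcribed (my reconstruction from the description above, up to constant coefficients) as:

```latex
\frac{\partial Y}{\partial t} = D\,\nabla^{2} Y + V\,Y
```

Here D is a diffusion constant (my notation): the D∇²Y term captures the diffusion arising from random hops across configuration space, and the VY term the breeding or decay of the population governed by the spongeam-defining value V.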

Like Dembski, I have doubts that this necessary condition is actually fulfilled given our current understanding of the physical regime, in spite of the stringent constraint that the known laws of physics put on the possible behaviours in configuration space. My feeling is, and I admit it's only an intuition, that the high organisation of life means that the number of possible organic structures is likely to be overwhelmed by the number of possible disordered configurations. That is, notwithstanding the known laws of physics which considerably reduce the "volume" of configuration space, there simply aren't enough viable organic configurations to populate configuration space with an extensive connected structure like the spongeam, a structure which is a necessary condition for molecules-to-man evolution. So, it may be that IDists like Dembski are actually right. But having said that, I don't think the case against evolution is actually proved and standard evolutionary mechanisms may yet be the engine driving natural history. I'm not strongly aligned on this question.

It is ironic that in one sense IDists of Dembski's ilk would likely agree with the academic establishment on one very important aspect of evolution; namely, that the fossil record testifies to a natural history of changing life forms over millions of years. So, in the natural history sense they both accept that evolution has occurred, although they disagree on the underlying driving mechanisms. A further irony here is that the mechanisms of evolution, when stated in their most general form, even by an evangelical atheist biochemist like Larry Moran, admit intelligent design as a possible driving mechanism - see here. I wonder if Moran is aware of this? 

So, at heart the contention between IDists and establishment evolutionists is about the nature of the internal engine driving evolutionary change. But I have doubts that this contention can ever be settled conclusively given that fossil, genetic & breeding data can only ever be a way of sampling the highly complex processes of natural history. I'll have to leave the two sides arguing the evidence for that one, although I'm inclined to float my vote against the spongeam as a reality and yet at the same time stand with the evolutionists in the culture war against an extreme right-wingery which of late has manifested itself as a threat to Western democracy.

***

If the class of functional structures is closely spaced but occupies too small a "volume" in configuration space to be reachable by evolutionary diffusion, there may yet be a way round this situation, one that I've probed for many years (although without unambiguous success). If some kind of tentative expanding parallelism is in operation which probes a few steps across configuration space, these islands of functionality could then be reached. But accompanying this there would have to be some underlying drive to preferentially select these islands, and that aspect implies a built-in teleology in the physical regime. Quantum mechanics gives us expanding parallelism straight away; it also gives us the selection, in the form of collapse of the wavefunction (I'm by-passing multiverse & decoherence interpretations of quantum mechanics here). But apparently (and I stress apparently) quantum selection isn't preferentially biased but random, as far as we know. Notwithstanding that, however, my radical suggestion is that there is an underlying teleology in the cosmos, a teleology embodied in a biased seek, inspect, reject and select algorithm behind the generation of life. In effect this would both considerably speed up the diffusion and introduce a life-favoring value of V in the above equation. 
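The "seek, inspect, reject and select" idea can be caricatured as a toy search algorithm. What follows is entirely my illustrative construction, not a claim about how the cosmos works: many parallel probes are thrown out from the current configuration (seek), each is scored (inspect), the non-improving ones are discarded (reject) and the best is kept (select). The hypothetical `peak` fitness function stands in for a life-favouring bias.

```python
import random

def teleological_search(start, fitness, n_probes=50, n_rounds=200, step=0.1):
    """Toy 'expanding parallelism' search: from the current configuration,
    throw out many parallel probe points (seek), score each one (inspect),
    discard the non-improving ones (reject) and keep the best (select).
    The bias toward high fitness stands in for the conjectured
    life-favouring selection."""
    current = start
    for _ in range(n_rounds):
        probes = [[x + random.gauss(0.0, step) for x in current]
                  for _ in range(n_probes)]
        best = max(probes, key=fitness)       # biased selection among probes
        if fitness(best) > fitness(current):  # reject anything non-improving
            current = best
    return current

random.seed(0)
# Hypothetical fitness landscape: a single peak at the origin of a 5-D space.
peak = lambda cfg: -sum(x * x for x in cfg)
result = teleological_search([1.0] * 5, peak)
print(peak(result))  # close to 0: the biased parallel search climbed the peak
```

The contrast with unbiased diffusion is the point: a single random walker in the same space wanders, whereas the biased parallel select homes in quickly, which is the "considerable speed up" gestured at above.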

Well, I suppose it's likely I'm on a hiding to nowhere here, but in the poisonous atmosphere of a polarised culture war, to even tentatively investigate such ideas is an affront to the hardened nihilistic atheists and a heresy to the hardened dualists among right-wing IDists & fundamentalists. Just as well I'm in a relatively unconnected domain on this part of the web; I'm not keen on meeting them. (I've already had three unpleasant chance web-meetings with fundamentalists and/or conspiracy theorists - see Richard Sweet, Steve Pastry and Ken Ham.)

For evangelical atheists whose world is ultimately meaningless these ideas would smack too much of intelligent contrivance & purpose to be acceptable. Take an atheist like Steven Weinberg for whom the universe is absurd: As his famous saying goes "The more the universe seems comprehensible, the more it seems pointless".  And yet I can empathize to some extent with Weinberg: From a human perspective science and industry have left us with enigmas & challenges to deal with. But even so perhaps Weinberg could have asked himself more questions about the origins of a comprehensible cosmic order rather than jumping to the conclusion that it's all absurd.

It is ironic that for Christian IDists and fundamentalists the thought of a cosmos provisioned to generate life via a teleological version of evolution is, if anything, an even greater affront than it is to nihilistic atheists; the latter will brush you off as a fool, whereas right-wing religionists are inclined to see you as a subversive, maybe even a malign & wicked influence. For one thing they have difficulty with the notion that God's creation is able to create information. But creating information is what teleological algorithms achieve. There is nothing intrinsically anti-Christian in seeing human beings as a thoroughly "natural" (sic) phenomenon; for as far as we know they are a dynamical pattern that works within the operational envelope of a physical regime that the sovereign Creator has set up and manages on a moment-by-moment basis. In that sense human beings are at once both natural and supernatural. Moreover, as a "natural" phenomenon human beings (like natural history itself) are clearly able to create information on a daily basis. But the dualistic religionists who have committed themselves to the notion that intelligence is tantamount to some form of mysterious intellectual alchemy that cannot be described in algorithmic and material terms have backed themselves into a corner: their vested interest in a particular line of thought brings down a taboo on any suggestion that in God's "natural" (sic) world information is being created all the time. Why is this such a difficult idea given that God is sovereign and it is God's created world? The temptations of gnosticism are never far away.

Sunday, April 26, 2020

Intelligence, Oracles, Magic and Politics


The de facto ID concept of intelligence.

As I have remarked many times on this blog, the de facto Intelligent Design movement affects to leave the internal details of the "intelligence" they believe to have stepped in and directly created life as a mystery. There is some justification in this policy: when handling great mysteries (e.g. Divinity) caution is sometimes the better part of valour and so it may be best to proceed apophatically; that is, to define the mystery in terms of what it isn't. An apophatic approach to intelligence seems to be stock-in-trade of the de facto ID community in North America. In fact as far as I can tell the mainstream IDists believe that the intelligent agent which created life is neither explicable in terms of so-called "natural forces" nor, for that matter, in terms of any process which has the potential to be expressed algorithmically, no matter how complex that algorithm may be. I find their views a little ironic: as many of them make claim to a Christian faith one might think that those so-called "natural forces" which we as Christians believe to be God's sublime Creation may hold one or two surprises for us as to what these "forces" (under Sovereign management) can do; after all, Quantum Mechanics alone has left enlightenment humankind thoroughly perplexed as to what it all means (for a start it is no longer meaningful to talk of matter as having identity of substance; identity comes via configuration). But no, in the de facto IDist world the "profane natural forces" vs "sacred intelligent agency" dichotomy is their habitual thesis and antithesis. In their view "matter" is too debased and inferior to be a secondary source of the dignified sublimity of mind.

So, in the light of all this I was not in the least bit surprised to find a post on the de facto Intelligent Design website Uncommon Descent with links to ID material giving the clearest evidence I've yet seen that de facto ID prefers to think about true "Intelligence" as a property tantamount to a magical power, setting it apart from anything else we encounter in this world*1. The UD post in question alerts us to one of de facto ID's gurus who is attempting to identify human intelligence as having the ability to act as a "partial halting oracle". That is, it is assumed that human intelligence is an oracle which can in some (but not all) cases solve the halting problem. According to Wikipedia, the concept of an "oracle" as used in computational theory is defined as follows:

An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box"  that is able to produce a solution for any instance of a given computational problem:

A "black box" capable of doing the right thing sums up those inscrutable oracular powers. This manoeuvre by an IDist guru well and truly places the essence of intelligence all but beyond analytical probing*2. As I have said many times before, the de facto IDists' preference for an esoteric notion of intelligence traces back to their use of their "explanatory filter" which, once it has been used to settle on intelligent agency as the cause of a pattern, doesn't really allow one to proceed much further. This of course contrasts with my own approach to intelligence which doesn't resort to super-analytical processes; well, nearly: in my Thinknet project I see intelligence as a teleologically driven search process by a "Thinknet"-like system. Thinknet systems are potentially chaotic, which means that they can amplify quantum ambiguities up to the macroscopic level, ambiguities which if they remained un-collapsed would give us people who could be in two places at once. Well, we can't have that at the macroscopic level, so if the mind is constantly collapsing those wave-functions then, I tender, it is this process of constant collapse which generates consciousness. But if the mind amplifies those apparently random collapses up to the macroscopic level there is therefore the potential for it to manifest that great incomputable - absolute randomness; so in that sense mind has an incomputable aspect to it. Nevertheless, what I'm proposing is no black-box concept of intelligence: I'm working on a notion of intelligence that is much more resolvable than ID's magical oracular black box and this is why I have to sophisticate the explanatory filter.
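The notion of a *partial* halting decider can at least be made concrete for toy machines, in contrast to the inscrutable oracle. The sketch below is my own illustration (not the ID guru's construction): a step-limited simulator for a tiny counter machine that correctly answers "halts" for some programs and simply gives up on others. By Turing's theorem no total version of this function can exist for a universal machine.

```python
def partial_halts(program, max_steps=1000):
    """A toy 'partial halting oracle' for a tiny counter machine.
    Instructions: ("inc", n) adds n to the counter; ("jnz", addr) jumps
    to addr if the counter is non-zero; ("halt",) stops.
    Returns True if the program halts within max_steps, None if undecided."""
    pc, counter = 0, 0
    for _ in range(max_steps):
        if pc < 0 or pc >= len(program):
            return True                   # ran off the end of the program: halted
        op = program[pc]
        if op[0] == "halt":
            return True
        elif op[0] == "inc":
            counter += op[1]
            pc += 1
        elif op[0] == "jnz":
            pc = op[1] if counter != 0 else pc + 1
    return None                           # gave up: the oracle is only *partial*

print(partial_halts([("inc", 1), ("halt",)]))   # True: halts at once
print(partial_halts([("inc", 1), ("jnz", 0)]))  # None: an endless loop defeats it
```

Note how undramatic this is: the "oracle" is just bounded simulation plus an honest "don't know", which is roughly what human mathematicians and debuggers do too. Nothing black-box or magical is required to be a partial halting decider; the mystery only enters if one claims unbounded oracular power.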

Turning to my subjective perspective on my own thought life, I must say that it certainly doesn't feel like some magical oracle able to coolly solve a problem just like that! In contrast, problem solving requires the hard graft of mental searching as one attempts to make connections which lead to solutions. To me my thought life feels much more like the seek, reject and select trial & error grind of a Thinknet search than it does ID's magical oracle where genius solutions just pop into the head. I see the hard work associated with thinking as a consequence of the overhead incurred by using a very general-purpose thinking system with a general-purpose connectionist language to solve the generic problem; as this system is a jack-of-all-trades problem-solver it can be slow at solving specialised problems as it has to first translate the problem into its connectionist terms.

I don't have a strong claim to having clinched the essence of intelligence any more than do the de facto IDists. But like myself they have just as much right to investigate an avenue of possibility in their search for what intelligence is about. In fact I believe their presence is a good thing; the more people investigating different avenues the better. For all I know the IDists might be right! Also, like the IDists, I believe that intelligence of some sort underpins the nature of the cosmos. So under any other circumstances I would applaud the IDists' efforts at tentatively trying to move forward with something new; after all, that's science for you. But I'm afraid in this case I can't applaud. Why is that?

***

Well, the answer to that is politics, especially the politics in North America. It's the catalyst that has precipitated and hardened a "natural forces" vs "intelligent agency" polarisation. The IDists are persona non grata among the academic establishment and so it is no surprise that these IDists have been tempted to put all their eggs into the "intelligence-is-magic" basket in order to batter academia's evolutionary and algorithmic rendering of the processes of life, processes the academics believe to have generated human intelligence. Sometimes I wonder if the de facto ID people aren't really being serious with their proposals and simply come up with their stuff just to rile the academic establishment!

But the politics doesn't stop there. IDism is all part of a greater right vs left wing tribal conflict which means that the right wing sharply disagree with the government tenured academics over one or more of a set of well contended issues (as mentioned in my last blog post): e.g. vaccinations, climate change, gay rights, deep government conspiracy theories, the regulatory role of government, the covid-19 lock down, hyper-market libertarianism, gun rights etc. The common underlying theme running through all this is the diffidence right-wingers have toward central government interventions; no!, make that the status quo interventions:  When it comes down to it the right-wing is just as capable of supplying individuals of totalitarian inclination as any other human sub-culture, if not more so. Do you think those characters one finds in America's quasi-militias would have the slightest respect for the argumentative cut and thrust of an authentic parliament? Unlikely: More to their taste would be for one of their plutocrats to do a Cromwell and clear parliament using AR-15 armed thugs.

Crackpot daftness can be found on the extremes of both left and right, but my argument here is with the right-wingery of the de facto ID community. Right-wing sentiments ultimately drive their all-but-exclusive commitment to an oracular paradigm of intelligence. They've backed themselves into the cramped corner of this paradigm because they are suspicious of those government-tenured academics who for the most part will get rubbed up the wrong way by de facto ID's support of oracular intelligence.

The republican language coming out of England's 1642 civil war fed into the American war of independence (from tax) and now the North American right wing endlessly recapitulates the sentiments of this language. Viz: interference coming from a tax-funded government is at best regarded with suspicion and at worst as evidence of a deep government conspiracy. For example, on Uncommon Descent one can find references to "climate change alarmism" and also "covid-19 lock down alarmism". The emotive term "alarmism" is the keyword expressing right-wing apprehensions about projects largely emanating from government-sponsored, tax-funded bodies. In my view coordinating the social responses to the black swans of climate change and Covid-19 requires centralised information and control; such a response is well beyond the powers of the sluggish market with its distributed blind-watchmaker decisions. But such government involvement is the right-winger's worst nightmare come true; especially if government should muff it (which they often do!).

The pretext supporting the "libertarian" polemic about Covid-19 and climate change is, however, entirely plausible if not sound: the world's wealth-generating markets could be so affected by central government policies that huge economic hardship or perhaps even an apocalyptic economic collapse results. But this line of argument cuts both ways. Covid-19 and climate change, if left to run their courses, could conceivably also cause economic collapse. Moreover, the right wing's emotive language can be used against them: one might accuse them of promulgating "economic hardship alarmism", or "totalitarian new world order alarmism". Both sides are faced by the same dilemma: the fix may be worse than the problem!

Whilst I strongly reject the border-line Marxism and anti-theism found among some academics, neither can I support the right-wing affectation for so-called libertarianism. Libertarianism is to the free market as fundamentalism is to Christianity; they are the kiss of death for the things they purport to uphold. Sociopathic libertarianism is a source of social disaffection thus helping to serve up a discontented society on a platter to either Marxist or right-wing dictators. For example, allowing covid-19 to take its course is likely to strike harder among the poor than the rich and therefore this solution to our problems is readily perceived as the solution in favour of the rich. Moreover, self-branded "libertarianism" with its connotation of "liberty" comes under the heading of "self-praise is no recommendation": Looking at the mix of potential plutocrats, domineering characters and the well armed quasi-militias (in America) who make claim to the name "libertarian" it is easy to imagine a would be dictator arising from their ranks. And it wouldn't be the first time that "liberty" and "hegemony" have walked hand in hand; let's recall the outcomes of the English civil war of 1642, the French revolution of 1789, the October revolution and Mao's China. Idealism and hegemony are closely linked.

The many wildcards of socio-economics don't stop some people thinking they are clever predictors and planners. The open-endedness of socio-economic systems is a bottomless pit of new data that can be cherry-picked and tailored to support the favoured planning polemic. In a chaotic world human beings are necessarily complex adaptive systems and therefore by definition much better opportunists than they are planners. They make their decisions and take their opportunities on the hoof. Like other biological organisms, society is a mix of central as well as distributed control, and this mix no doubt better suits a chaotic world where black swans create new problems and at the same time deliver otherwise unforeseen opportunities. But the time-honoured overriding concern of human beings is that of hanging on to the immediacies of survival at all costs, and that's probably why many people favour social distancing rather than the long shot of saving an abstract economic system that more likely favours lining the rich man's pockets in his ivory tower before it gets to line your pockets (if you've survived Covid-19!). While there's life there is hope, hope that the new opportunities open up into vistas of fruitful originality and prosperity. We can only plant and water; it is God that gives growth.


POSTSCRIPT 
27/4/20

In a post on Uncommon Descent that I wouldn't necessarily want to take issue with we find an interesting comment from a character called "Polistra". Viz:

Polistra April 26, 2020 at 2:48 pm
This is silly and illogical. It wasn’t the virus that stopped the world.
The virus just wandered around and found tissues to infect, and the humans who own the tissues killed the virus using standard weapons and tactics. A very few humans were unable to maintain the war, and they lost.
The world was stopped by GOVERNMENTS. The virus was just the latest fake “reason” for stopping the world.

This commentator doesn't like the fact that the UD post suggests 900 bytes of covid-19 DNA is the reason why the world has shut down. Polistra clearly wants a much clearer statement that the culpability lies with GOVERNMENTS. Polistra doesn't tell us why governments want to shut the world down with what he calls a "fake" reason, any more than flat earthers will tell us why the UN wants us to believe in a spherical earth instead of their flat earth. Although I don't think most UDers would go along with this kind of conspiracy theorism, it's probably significant that they don't challenge him: he's one of them, he's part of their anti-government tribe! The irony is, as I have already said, that it's so easy to see dictators readily emerging from the ranks of the domineering fanatical right wingers if they should ever get power.


Footnotes
*1 I'm not quite sure how this works out with human beings, objects which from a third-person perspective are observed to be entirely a product of complex organisations of God's atoms.

*2 Turing's halting theorem and Gödel's incompleteness theorem are closely related in that both use the "runaway self-referencing" reasoning found in the diagonalisation procedure. Roger Penrose proposed that the human ability to understand Gödel's argument proves that human thinking is an incomputable process. Hence Penrose's ideas are also favoured by IDists. Whilst it is wrong to dismiss Penrose outright, I have submitted my reasons why I don't follow him down this particular avenue.