
Saturday, April 08, 2023

The AI Question and ChatGPT: "Truth-Mills"?

Everyone seems to be complaining about the Artificial Intelligence application "ChatGPT": from passionate left-wing atheist PZ Myers through moderate evangelical atheist Larry Moran to cultish fundamentalist Ken Ham, it's nothing but complaints!

PZ Myers refers to ChatGPT as a "bullsh*t fountain". Also, in a post titled "How AI will destroy us" he blames capitalism and features a YouTuber who calls AI "B.S.". Biochemist Larry Moran gives ChatGPT a fail mark on the grounds that it is "lying" about junk DNA (that's also a complaint of Myers', although "lying" is rather too anthropomorphic in my opinion). The Christian fundamentalists go spare and lose it completely: Kentucky theme park supremo Ken Ham, in a post titled "AI - It's pushing an Anti-God Agenda" (March 1st), complains that ChatGPT isn't neutral but is clearly anti-God; what he means by that is that its output contradicts Ken's views! We find even greater extremism in a post by PZ Myers where he reports Catholic Michael Knowles claiming that AI may be demonic!

Ken Ham is actually much nearer the mark than Knowles when he tells us that AI isn't neutral. The irony is that although we are inclined to think of AI as alien, inhuman, impartial and perhaps of superhuman power, it is in fact inextricably bound up with epistemic programming that has its roots in human nature and the nature of reality itself. It is therefore limited by the same fundamental epistemic compromises that necessarily plague human thinking**. Therefore, like ourselves, AI will of necessity hold opinions rather than detached cold certainties. Let me expand this theme a bit further.

***

From 1987 onwards I tried to develop a software simulation (eventually written in C++) of some of the general features of intelligence. I based this endeavor on Edward de Bono's book "The Mechanism of Mind". I tell the story in my "Thinknet" project, and although it was clear that it was the kind of project whose potential for further development was endless, I felt that I had taken its development far enough to understand the basics of how intelligence might form an internal model of its surroundings. The basic idea was simple: it was based on the generalised Venn diagram. Viz:


The whole project was predicated on the assumption that this kind of picture can be used as a general description of the real world. In this picture a complex profusion of categories is formed by properties distributed over a set of items*. If these properties are distributed randomly then there are no relations between them, and it is impossible to use any one property as a predictor of other properties. But our world isn't random; rather, it is highly organized, and this organization means that there are relationships between properties which can be used predictively. As I show in my Thinknet project, the upshot is that the mechanism of mind becomes a network of connections representing these nonrandom relations. The Thinknet project provides the details of how a model of thinking can be based on a generalised Venn diagram.
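The predictive use of nonrandom property distributions can be sketched in a few lines of Python. To be clear, this is not the Thinknet code itself (which was written in C++): the items, the properties and the simple conditional-probability "connection strength" below are illustrative assumptions only.

```python
from itertools import combinations
from collections import Counter

# A toy "generalised Venn diagram": each item carries a set of properties.
# All of these names are illustrative, not drawn from the Thinknet project.
items = {
    "robin":   {"feathers", "flies", "lays_eggs"},
    "sparrow": {"feathers", "flies", "lays_eggs"},
    "penguin": {"feathers", "swims", "lays_eggs"},
    "bat":     {"fur", "flies"},
    "cat":     {"fur", "purrs"},
}

# Count how often each property, and each pair of properties, occurs.
single = Counter()
pair = Counter()
for props in items.values():
    single.update(props)
    pair.update(frozenset(p) for p in combinations(sorted(props), 2))

def predicts(a, b):
    """Estimate P(b | a): how strongly property a predicts property b."""
    return pair[frozenset((a, b))] / single[a]

# Because the distribution is nonrandom, one property predicts another:
print(predicts("feathers", "lays_eggs"))  # 1.0
print(predicts("fur", "lays_eggs"))       # 0.0
```

The set of pairwise conditional probabilities is, in effect, the "network of connections" the paragraph above describes: a weighted link between any two properties that reliably co-occur.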

One thing is fairly obvious: if we have many items and many properties, a very complex Venn picture may emerge, and the epistemic challenge then arises from the attempt to render this picture as a mental network of connections. Epistemically speaking, both humans and AI systems suffer from the same limitation: in trying to form a network of connections, they can only do so from a limited number of experiential samples. This would be fine if the world were relatively simply organized, but the trouble is that although it is highly organized it is not simple; it is in fact both organized and very, very complex. Complexity is a halfway house between the simplicity of high order and the hyper-complexity of randomness. To casual observers, whether human or AI, this complexity can at first sight look like randomness and therefore presents great epistemic challenges in trying to render this world as an accurate connectionist model, given the limits on sampling capacity. On top of that, let's bear in mind that many of the connections we make don't come from direct contact with reality itself but are mediated by social texts. In fact, in the case of my Thinknet model, all its information came from compiled text files where the links were already suggested in myriad Bayesian statements. This "social" approach to epistemology is necessary because solitary learning from "coalface" experience takes far too long; that would be like starting from the Paleolithic.
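The point about limited sampling can be made concrete with a small simulation. This is a generic sketch of my own devising, not anything from Thinknet: assume the world really does contain an organized relationship (property B follows property A 80% of the time), and compare what a small experiential sample suggests against a large one.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Assumed ground truth: in the underlying world, B follows A 80% of the time.
TRUE_P = 0.8

def observe(n):
    """Estimate P(B | A) from n sampled encounters with A."""
    hits = sum(random.random() < TRUE_P for _ in range(n))
    return hits / n

# A handful of samples gives a noisy, possibly misleading connection weight...
small = observe(5)

# ...while a large sample recovers the underlying organization.
large = observe(10_000)

print(small, large)
```

With only a few samples the estimated connection strength can land almost anywhere, which is exactly why genuine organization can "at first sight look like randomness" to an observer, human or AI, with limited sampling capacity.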

Like Thinknet, we learn far more from social texts than we do from hands-on experience. Those social "text files" are extremely voluminous and take a long time to traverse; there is no quick fix that allows this textual experience to be bypassed. This immediately takes us into the realm of culture, group dynamics and even politics, where biased sampling is the natural state of human (& AI) affairs. Because its complex mental networks are derived from culture, intelligence, both human and AI, is only as good as the cultural data and samples it receives. So, in short, AI, like ourselves, is going to be highly opinionated, unless AI has some kind of epistemic humility built into its programming. AI isn't going to usher in a new age of unopinionated, error-free knowledge objectively derived from mechanical "Truth-Mills". The age-old fundamental epistemic problems will afflict AI just as they afflict human beings. PZ Myers might call ChatGPT a bullsh*t fountain, but then that's more or less also his opinion of the Ken Hams, Michael Knowleses and Donald Trumps of this world. On that matter he is undoubtedly right! As with humanity (e.g. Ken Ham), so with ChatGPT. The bad news for PZ Myers is that bullsh*t production has now been automated!


Truth Mills: Is AI going to automate the production of theoretical fabric?


ChatGPT for dogs

Footnotes:

* Venn diagrams don't have the facility to form a set of sets. However, this can be achieved using another "higher level" Venn diagram; we thus have Venn diagrams that are about other Venn diagrams. See here:

Quantum Non-Linearity: The Thinknet Project. Footnote: On Self Description (quantumnonlinearity.blogspot.com)
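The footnote's point about a "higher level" diagram happens to have a neat Python analogue (my illustration, not the linked post's): ordinary mutable sets, like Venn regions, can't be members of another set, but freezing them makes a set of sets possible. The category names below are purely illustrative.

```python
# Ordinary Python sets are unhashable and so can't be members of
# another set. Freezing them yields a "set of sets" -- the analogue
# of a higher-level Venn diagram whose members are themselves whole
# Venn pictures. The category names are purely illustrative.
birds = frozenset({"robin", "sparrow", "penguin"})
mammals = frozenset({"bat", "cat"})
egg_layers = frozenset({"robin", "sparrow", "penguin"})

higher_level = {birds, mammals, egg_layers}

# birds and egg_layers contain the same members, so the higher-level
# diagram treats them as one and the same region:
print(len(higher_level))  # 2
```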

** Epistemic difficulties surrounding accessibility and signal opacity loom large in historical research. "Epistemic distance" is a big issue in human studies.