
Wednesday, May 17, 2023

AI "Godfather" retires & voices fears about AI dangers

This post is still undergoing correction and enhancement. 

Do AI systems have a sense of self?

What worries me about these robots is not that they are robots, but that they look too close to humanity, which probably means they are made in the image of man and that, as they seek their goals, they share human moral & epistemic limitations.


This BBC article is about the retirement of Artificial Intelligence guru Geoffrey Hinton. Here we read:

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field. Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were "quite scary". "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

Dr Hinton's pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

Before I proceed further with this article, first a reminder about my previous blog post where I mentioned the "take home" lessons from my AI "Thinknet" project. Any intelligence, human or otherwise, has to grapple with a reality that, I propose, can be very generally represented in an abstracted way as a rich complex of properties distributed over a set of items; diagrammatically, this can be pictured as a Venn diagram whose overlapping sets are the properties and whose elements are the items instantiating them.

Intelligence perceives any non-random relations between properties and then draws conclusions from these relations*. These relations can be learnt either a) by recording the statistical linkages between properties and selecting out and remembering any significant associations from these statistics, or b) by reading text files that contain prefabricated associations stated as Bayesian probabilities. Because learning associations from statistics is a very long-winded affair I opted for text-file learning. Nobody is going to learn, say, quantum theory from direct experience without a very long lead time.
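To make the two learning routes concrete, here is a minimal Python sketch. The function names and the text-file format are my own hypothetical choices for illustration, not anything taken from the Thinknet project itself: route (a) estimates the conditional probability of one property given another from co-occurrence counts over observed items, while route (b) simply loads prefabricated association probabilities from a text file.

```python
from collections import Counter
from itertools import combinations

# Route (a): learn P(B | A) from co-occurrence statistics.
# Each "item" is the set of properties observed together on one occasion.
def learn_from_statistics(items):
    single = Counter()
    pair = Counter()
    for props in items:
        for p in props:
            single[p] += 1
        for a, b in combinations(sorted(props), 2):
            pair[(a, b)] += 1
            pair[(b, a)] += 1
    # Conditional probability P(b | a) = count(a with b) / count(a)
    return {(a, b): pair[(a, b)] / single[a] for (a, b) in pair}

# Route (b): read prefabricated associations from a text file,
# one "A B probability" triple per line (a purely hypothetical format).
def learn_from_text_file(path):
    associations = {}
    with open(path) as f:
        for line in f:
            a, b, prob = line.split()
            associations[(a, b)] = float(prob)
    return associations

if __name__ == "__main__":
    observations = [
        {"cloud", "rain"}, {"cloud", "rain"}, {"cloud"}, {"rain", "wind"},
    ]
    stats = learn_from_statistics(observations)
    print(stats[("cloud", "rain")])  # 2/3: rain accompanied cloud in 2 of 3 cloudy items
```

Route (a) needs many observations before its ratios mean anything; route (b) inherits in one read whatever its compilers got right or wrong, which is the trade-off discussed next.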

The world is full of many different "properties" and myriad instantiations of those properties coming into association. As any intelligence naturally has a limited ability to freely sample this complex structure, intelligence tends to be error-prone as a result of a combination of insufficient sampling and accessibility issues**; in short, intelligence faces epistemic difficulties. However, if experience of the associations between properties is accumulated and tested over a long period of time and compiled as reviewable Bayesian text files, this can help mitigate the problem of error, though not obviate it completely. In a complex world these text files are likely to remain partial, especially if affected by group dynamics where social pressures exist and groupthink gets locked in.
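To put a rough number on the sampling problem, here is a small sketch with illustrative figures of my own (assuming SciPy is available). Treating an estimated association probability as a Beta posterior shows that the same observed rate is almost worthless when it comes from a handful of samples but becomes usable once associations have been accumulated and tested over many observations, which is what a long-compiled Bayesian text file amounts to.

```python
from scipy.stats import beta

def credible_interval(successes, trials, mass=0.95):
    # Beta(1, 1) prior updated by the observed co-occurrence counts.
    return beta.interval(mass, 1 + successes, 1 + trials - successes)

# The same observed rate (about 2/3) of property B given property A,
# estimated from very different sample sizes.
print(credible_interval(2, 3))      # roughly (0.19, 0.93): almost uninformative
print(credible_interval(200, 300))  # roughly (0.61, 0.72): a usable estimate
```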

The upshot is that an intelligence can only be as intelligent as the input from its environment allows. In the extreme case of a complete information black-out it is clear that intelligence would learn nothing and, like a computer with no software, could think nothing; the accessibility, sample representativeness and organization of the environment in which an intelligence is immersed, and which it attempts to interrogate, set an upper limit on just how intelligent an intelligence can be.

The weak link in the emergence of text-dependent intelligence is those Bayesian probabilities - millions of them: they may be unrepresentative, and there may be too many of them or too few of them. They will have a tendency to be proprietary, depending on the social circles in which they are compiled. They may be biased by various human adaptive needs; for example, the need to appear dogmatic and confident if one is a leader, or the need to attach oneself to a group and express doctrinal loyalty to the group in return for social support, validation & status. Given that so much information comes via texts rather than first-principle contact with reality, these text files are compiled by interlocutors whose knowledge may be compromised by partial & biased sampling, group dynamics and a priority on adaptive needs. This may well be passed on to any AI that reads them.

In short, in the light of these considerations, AI may well be as hamstrung as humanity in forming sound conclusions from text files; the alternative is to go back to the Stone Age and start again by accumulating knowledge experimentally, but even then reality may not present a representative cross-section of itself.

 ***

The Venn diagram and the gambling selection schemes theorem are key to understanding this situation. The crucial lesson is that everybody should exercise epistemic humility because the universe only reveals so much about itself; it need not reveal anything, but providentially it reveals much. Let's thank The Creator for that.

Finally let me repeat my two main qualifications about current AI: 

a) In my opinion, digital AI is only a simulation of biological intelligence: it is not a conscious intelligence. For consciousness to result one would have to use atoms and molecules in the way biology uses them. (See the last chapter in this book for my tentative proposal for the physical conditions of consciousness.)

b) Nevertheless, my working hypothesis is that biological intelligence is not so exotic in nature that it can't be simulated with sufficiently sophisticated algorithms. For example, I think it unlikely that biological intelligence is a non-computable phenomenon - see here for my reasons why. The de facto North American Intelligent Design community have painted themselves into a corner in this respect, in that they have become too committed to intelligence being something exotic. This is a result of an implicit philosophical dualism which makes a sharp demarcation between intelligence and the other phenomena of the created world. This implicit dualist philosophy has been built into their "Explanatory Filter". They appear unaware of their dualism.

So, with these thoughts in mind, let me now go on to add comment to the BBC article:

***


BBC: In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning. 

MY COMMENT: That "similarity", as I've said before, is in formal structure rather than being qualitatively similar; that is, it is a simulation of human thinking. A simulation will have isomorphisms with what's being simulated but will differ from the object being simulated in that its fundamental qualitative building blocks are of a different quality. For example, an architect's plan will have a point-by-point correspondence with a physical building, but the stuff used in the plan is of a very different quality to the actual building. It is this difference in quality which makes a simulation a simulation rather than the thing-in-itself. To compare an architect's plan with a dynamic computer simulation might seem a little strained, but that's because a paper plan captures only the three spatial dimensions and lacks the fourth dimension of time. Current digital AI systems are dynamic simulations in that they add the time dimension: but they do not make use of the qualities of fundamental physics which, if used rightly, I propose, result in conscious cognition.

BBC: The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds. "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said. "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

MY COMMENT: The level of information held in a library or on the Web, at least potentially, exceeds the information that the human brain holds, so the first statement above is not at all startling. But if one characterizes the information a human mind can access via a library, an iPhone or a computer as off-line information accessible via these clever technological extensions of the human mind, this puts human information levels back into the running again. After all, even in my own mind there is much I know which takes a little effort and time to get back into the conscious spotlight and almost classifies as a form of off-line information.

Yes, I'd accept that AI reasoning has room for (possible) enhancement and may eventually do better than the human mind, just as adding machines can better humans at arithmetic. But why do we need to worry? The article suggests why......

BBC: In the New York Times article, Dr Hinton referred to "bad actors" who would try to use AI for "bad things". When asked by the BBC to elaborate on this, he replied: "This is just a kind of worst-case scenario, kind of a nightmare scenario. "You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals." The scientist warned that this eventually might "create sub-goals like 'I need to get more power'".

MY COMMENT: Well yes, I think I agree! As soon as man discovered that a stone or a stick could be used as a weapon, bad actors were always going to be a problem. Today of course we've got the even more pressing problem of bad actors with AR15s running amok and, even worse, despots in charge of nuclear weapons. But is Hinton right about the creation of robots with power-seeking sub-goals? Maybe, but if that can happen then somebody might create robots with the sub-goal of destroying robots that seek power! Like other technology, AI appears to be just another case of an extension of the human power to enhance and help enforce its own high-level goal seeking. It is conceivable, however, that either by accident or design somebody creates an AI that has a high-level goal of seeking its own self-esteem & self-interests above all other goals: this kind of AI would have effectively become a complex adaptive system in its own right; that is, a system seeking to consolidate, perpetuate & enhance its identity. But by then humans would have at their disposal their own AI extension to the human power to act teleologically. The result would be like any other technological arms race: a dynamic stalemate, the likely result if both sides have similar technology. So, it is not at all clear that rampaging robots, with or without a bad-acting human controller, would inevitably dominate humanity. However, I agree, the potential dangers should be acknowledged. Those dangers will be of at least three types: a) AI drones out of the control of their human creators (although I feel this to be an unlikely scenario); b) probably more relevant, what new technology has so often done in the past, viz. shifting the production goal posts and resulting in social disruption and displacement; c) abuse of the technology by "bad actors".

But much of Hinton's thinking about the dangers of AI appears to be predicated on implicit assumptions about the possibility of AI having a clear sense of its identity; that is, a self-aware identity. A library of information may have a clear identity in that its information media are confined within the walls of a building, but the question of self-aware identity only comes to the fore when the library holds information about itself. Hinton's fears rest on the implicit assumption that an AI system can have a self-aware sense of individual identity, that is, a sense of self and the motivation which seeks to perpetuate and enhance that self. Without that sense of identity AI remains just a very dynamic information generator; in fact like a public library in that it is open to everyone, but with the addition of very helpful and intelligent assistants attached to that library. But if an AI system has a notion of self and is therefore capable of forming the idea that its knowledge somehow pertains to that self, perhaps even believing it has property rights over that knowledge, we are then in a different ball game. This sense of self and ownership is in fact a very human trait, a trait which potentially could be passed on by human programming (or accidental mutation & subsequent self-perpetuation?*). The "self" becomes the source of much aggravation when selves assert themselves over other selves. Once again, we have a problematical issue tracing back to and revolving round the very human tendency to over-assert the self at the cost of other selves as it seeks self-esteem, ambition, status & domination. In a social context the self has the potential to generate conflict. In human society a selfish identity-based teleology rules OK - if it can. As the saying goes, "Sin" is the word with the "I" in the middle. But the Christian answer is not to extinguish the self but to bring it under control, to deny self when other selves can be adversely affected by one's self-assertion. (Philippians 2:1-11)

BBC: He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have. "We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

MY COMMENT: This data-sharing idea is valid. In fact, that is exactly what humans have themselves done in spreading technological know-how via word of mouth and information media. Clearly this shared information will be so much more than any one person can know, but we don't lose sleep over that because it is in the public domain and in the public interest; it is, as it were, off-line information available to all should access be required. In this sense there are huge continents of information available on the internet. Here the notion that that information is property belonging to someone or something is a strained idea. Therefore, in what sense wouldn't the information gained by 10,000 webbots also be my knowledge? If we are dealing with a public-domain system this is just what technology has always been: viz., a way of extending human powers. Energetically speaking a car is so much more powerful than a human, but the human is in the driving seat, and it is the driver, and not the car, who has a strong sense of individual identity and ownership over the vehicle. Likewise, I have an extensive library of books containing much information unknown to me, although it is in principle available to me should I want to interrogate this library using its indices. It would be even better if all this information was on computer and I could use clever algorithms to search it, and better still if I could use a chatbot; this would extend my cognitive powers even further. But such clever mechanical informational aids don't necessarily mean they also simulate a strong sense of self; all we can say at this stage is that they are testament to the ability of human beings to extend their powers technologically, whether those powers pertain to mental power or muscular power.

However, I would accept that it may well be possible to simulate computationally a strong sense of self. And again, Hinton's diffidence &/or fear that digital systems can know so much more than any one person only has serious implications if that knowledge is attached to an intelligence (human or otherwise) which has a strong sense of personal identity, ownership and a strong motivation toward self-betterment over and against other selves. Since information is power, the hoarding and privatization of such information would then be in its (selfish) interests. Only in this context of self-identity does the analogy of a large public library staffed by helpful slave assistants break down. Only in this context can I understand any assumption that the knowledge belongs to one AI self-aware identity. This very human concept of personal ambition & individual identity appears to be behind Hinton's fears, although he doesn't explicitly articulate it. With AI it is very natural to assume we are dealing with a self-aware self; but that need not be the case: it is something which has to be programmed in.

If there is a powerful sense of individual identity which wishes to hoard knowledge, own it and privatize it, that sounds like a very human trait, and if this sense of individualism and property were delegated to machinery, it is then that fears about AI may be realized. But until that happens AI systems are just an extension of human powers and identity.

Let's also recall where chatbot information is coming from: it's largely coming from the texts of human culture. Those texts contain errors and naturally AI systems will inherit those errors. An AI system can only be as intelligent as its information environment allows. Moreover, as we live in a mathematically chaotic reality it is unlikely that AI will achieve omniscience in terms of its ability to predict and plan; it is likely, then, that AI, no more than humanity, will be able to transcend the role of being a "make-it-up-as-you-go-along" complex adaptive system.

BBC: Matt Clifford, the chairman of the UK's Advanced Research and Invention Agency, speaking in a personal capacity, told the BBC that Dr Hinton's announcement "underlines the rate at which AI capabilities are accelerating". "There's an enormous upside from this technology, but it's essential that the world invests heavily and urgently in AI safety and control," he said.

Dr Hinton joins a growing number of experts who have expressed concerns about AI - both the speed at which it is developing and the direction in which it is going.

'We need to take a step back': In March, an open letter - co-signed by dozens of people in the AI field, including the tech billionaire Elon Musk - called for a pause on all developments more advanced than the current version of AI chatbot ChatGPT so robust safety measures could be designed and implemented.

Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter.

Mr Bengio wrote that it was because of the "unexpected acceleration" in AI systems that "we need to take a step back".

But Dr Hinton told the BBC that "in the shorter term" he thought AI would deliver many more benefits than risks, "so I don't think we should stop developing this stuff," he added.

He also said that international competition would mean that a pause would be difficult. "Even if everybody in the US stopped developing it, China would just get a big lead," he said.

Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed "with a lot of thought into how to stop it going rogue".

MY COMMENT: I think I would largely agree with the foregoing. The dangers of AI are twofold:

1. AI, like all other human technology, is an extension of human powers and it is therefore capable of extending the powers of both good and bad actors: the latter is a social problem destined to be always with us.

2. Those human beings who are effectively creating AI in their own image may create AI systems with a sense of self and the goal of enhancing their own persona, where self-identity takes precedence over all other goals.

My guess is that the danger of AI going rogue and setting up business for its own ends is a lot less likely than AI being used to extend the powers of bad human actors. Nevertheless, I agree with Hinton that we should continue to develop AI, but be mindful of the potential pitfalls. Basically, the moral is this: read Hinton's "with a lot of thought into how to stop it going rogue" as "with a lot of thought into how to stop it becoming too dangerously human and at the disposal of bad actors". The danger is endemic to humanity itself and the irony is that the potential dangers exist because humans create AI in their own image and/or AI becomes an extension of the flawed human will. Thus Hinton's fears are grounded in human nature, a nature that all too readily asserts itself over and above other selves, a flawed nature that may well be passed on to AI systems built in the image of humanity with the same old goals of adapting and preserving an individual identity. Christianity calls those flaws in human nature "Sin", the word with the "I" in the middle. We all have a sense of individuality and a conscious sense of self: that individuality should not be extinguished but, when called for, self should be denied in favour of other selves in a life of service (Phil 2:1-11).


Footnote:

* Venn diagrams don't have the facility to form a set of sets. However, this can be achieved using another "higher level" Venn diagram; we thus have Venn diagrams that are about other Venn diagrams. See here:

http://quantumnonlinearity.blogspot.com/2016/05/the-thinknet-project-footnote-on-self_11.html
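As a small illustration of the point (a toy example of my own, not from the linked post): a single Venn diagram depicts overlapping sets whose members are items, but to treat whole sets as items in their own right one needs a second, higher-level structure.

```python
# First-level sets: the kind of thing a single Venn diagram can depict.
animals = frozenset({"cat", "dog", "eel"})
pets = frozenset({"cat", "dog"})

# Second-level structure: a set whose members are the sets above.
# A single Venn diagram has no facility for this; it needs a "higher
# level" diagram whose items are whole first-level sets.
categories = {animals, pets}

print(animals & pets)   # frozenset({'cat', 'dog'}): overlap at the first level
print(len(categories))  # 2: the two first-level sets treated as items
```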

** Epistemic difficulties surrounding accessibility and signal opacity loom large in historical research. "Epistemic distance" is a big issue in human studies. 

Another BBC link: