Tuesday, June 26, 2018

Artificial Intelligence, Thinknet and William Dembski

If I'm right, then those anticipated AI-enabled super-intelligences, with huge AI resources and information at their disposal, have arrived and are walking our streets already!

I was intrigued to see a blog post by Intelligent Design guru William Dembski in which he gives his opinion on artificial intelligence. The post can be found here:

I have always thought of Dembski as a nice Christian bloke with some useful ideas. But he is a person who, in the minds of many, has been too closely linked with a vociferous right-wing, anti-atheist, anti-academic-establishment Christian culture, a culture which I suppose is in some ways a function of the left-right polarization in US politics and culture. This may explain why a Mr. Nice Guy like Dembski has been so unpleasantly abused by atheists.

Dembski's work has shown that the creation of life, whether via evolution or other means, requires a huge burden of information, and this conclusion is undoubtedly correct (see here). But then even atheists like PZ Myers and Joe Felsenstein will tell us that evolution is not a purely random process (see here, here and here) and therefore, by implication, must somehow be tapping into a high level of background information. Moreover, Dembski's ideas, on his own admission, don't directly contradict standard evolution provided one admits that for standard evolution to work it must be exploiting a priori information resources (see here).

But it wasn't just atheists who were to abuse Dembski. In 2016 (or thereabouts) Dembski was given his marching orders from Southwestern Baptist Theological Seminary (see here). He had proved too conservative for the "liberal" evangelicals at Baylor University and now too "liberal" for the fundamentalists. Of late he has been somewhat out in the wilderness, comfortably fitting in with neither liberal evangelicalism nor fundamentalist evangelicalism. There may also be signs that even the de facto IDists aren't too happy with him. Fundamentalist theme-park manager Ken Ham has waded into the anti-Dembski fray with his inimitable line of spiritual abuse; in Dembski's own words, Ham "went ballistic" at some of Dembski's theology (but then Ham is in a constant ballistic state about Christians who don't agree with him). True, some of Dembski's theology does seem a little novel, but he's a brave and radical thinker who is prepared to stick his neck out and take the risks entailed by mooting new ideas; the person who doesn't make mistakes doesn't make anything. Seek, reject and select; the subject of searching & rejecting, after all, is one of Dembski's major specialisms! Being in the trackless wilderness, where you have little to lose, can give one that edgy intellectualism.

However, the trail I personally have been led down regarding the role of intelligence in our cosmos is a little different from Dembski's. I would characterize myself as an explorer of the notion of Intelligent Creation as opposed to intelligent design. Intelligent Creation eschews the IDists' default dualism, which tends to revolve around a "natural forces" vs "intelligent agency" dichotomy. Intelligent Creation regards the processes of life creation as a form of cognition in action. In contrast, de facto ID tends to treat intelligent processes as inscrutable, almost sacred ground, and explicitly states that the exact nature of intelligence is beyond its terms of reference. In fact some IDists will say that the science of ID is largely about "design detection" and beyond this has little to say (see here). This policy, I think, traces in large part back to William Dembski's explanatory filter. This filter results in an epistemic which I have reviewed critically (see here). Roughly speaking, the filter means that for the IDist "intelligent design" is a kind of default explanation, considered to be "detected" when the supply of all other ("natural") explanations is exhausted.
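As a toy paraphrase of the filter's eliminative logic in code - my own sketch, not Dembski's formalism, and the predicate names are hypothetical placeholders for his probability-based criteria - the default character of the "design" verdict looks something like this:

```python
# A toy paraphrase of the explanatory filter's eliminative logic.
# The predicates are hypothetical placeholders; Dembski's actual criteria
# involve probability bounds and specified complexity, not simple booleans.

def explanatory_filter(event, explained_by_law, probable_by_chance):
    """Return the filter's verdict for an observed event."""
    if explained_by_law(event):
        return "regularity"   # a law-like "natural" explanation suffices
    if probable_by_chance(event):
        return "chance"       # not law-like, but plausibly random
    return "design"           # the default once other explanations are exhausted

# An event no law explains and chance renders improbable falls through
# to "design" - which is exactly the default-explanation structure noted above.
verdict = explanatory_filter("event", lambda e: False, lambda e: False)
```

The point the sketch makes is structural: "design" is never tested directly; it is simply what remains when the other branches fail.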

One of the consequences of Dembski's epistemological heuristic is that IDists have a tendency to regard the details of intelligence as beyond the pale of human reason and science; they think their work complete once intelligent agency has been identified and proceed little further. Another consequence is that they tend to see the input of intelligence as taking the form of injections of information that punctuate natural history, the origins of which are an enigma. Beyond claiming that these information injections are sourced in some inscrutable entity identified as "intelligence", de facto IDists are disinclined to comment further. Given all this it is no surprise that some IDists have welcomed Roger Penrose's proposal that intelligence, at least of the human kind, is an incomputable process (see comment 4 on this Uncommon Descent post). If this is true then human intelligence has its work cut out if it is aiming at self-understanding.

Although I think IDists should be trying harder to understand the nature of intelligence, I would nevertheless be the first to concede that human-level intelligence (and beyond), either because of its sheer complexity or for other unidentified fundamental reasons, may be beyond human understanding. Since the target of human understanding here is our own human intelligence, we are of course talking about self-understanding; this is one of those intriguing self-referencing scenarios. But in spite of a possible barrier to human self-understanding, in my Thinknet Project I'm proceeding under the working assumption that it is possible for the human mind to at least make some significant inroads into self-understanding, even if it proves impossible to arrive at a full understanding. However, although I'm very non-committal about how far my work on the natural process of human thinking can take me, it is very likely that the average IDist would look askance at this work because of de facto ID's commitment to the sacred and almost inscrutable role intelligence plays in their epistemic paradigm. Moreover, given the polarized state of the pan-Atlantic debate on the intelligent design question, it may be that the average right-wing-leaning IDist & borderline fundamentalist would see me as a lackey of the totally depraved academic establishment!

I was intrigued to see that in a digression in his post Dembski, unlike some other IDists, rejects the idea that the human ability to understand Gödel's incompleteness theorem suggests that human cognition is an incomputable process. Dembski's digression may bear some resemblance to my own reasons for rejecting Gödel's incompleteness theorem as proof that human cognition is an incomputable process (see here for my reasoning). I won't take Dembski's digression any further here, as I'm more interested in his belief that AI is unlikely to reach a human level any time soon. And that is what I will look at here.


In his post Dembski says this:

It would be an overstatement to say that I’m going to offer a strict impossibility argument, namely, an argument demonstrating once and for all that there’s no possibility for AI ever to match human intelligence. When such impossibility claims are made, skeptics are right to point to the long string of impossibilities that were subsequently shown to be eminently possible (human flight, travel to the moon, Trump’s election). Illusions of impossibility abound.

Nonetheless, I do want to argue here that holding to a sharp distinction between the abilities of machines and humans is not an illusion of impossibility.

I like Dembski's approach here: unlike some of the right-wing IDists, Dembski is tentative and isn't burning his bridges. He's not arguing for the outright impossibility of human-level AI; he's going for the weaker thesis that on the current evidence AI looks to be of a very different quality from human intelligence. On that weaker thesis, I have to say, I probably agree with him.

The argument Dembski develops goes something like this: up until now all AI has been adapted to particular well-defined problem areas, areas that are relatively easy to reduce to closed-ended searches. But human intelligence is not restricted to the solution of well-defined problems specified in advance. Moreover, human intelligence cannot be modeled as an extensive library of searches, managed and appropriately selected to match the problem at hand by some higher level of intelligence which Dembski identifies as a "homunculus" skilled at searching for the right search. For in human intelligence we are looking at a much more open-ended ability, one competent to move between problem areas which haven't been defined in advance and to construct the search needed to solve the problem at hand. Originality, novelty and imagination are the watchwords of human intelligence. It is the competence to deal with this open-endedness, something human beings are good at, which seems to be missing in current AI.
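The "library of searches" picture can be caricatured in a few lines of code - my own hypothetical framing, not Dembski's: each solver handles one problem class defined in advance, and a higher-level dispatcher (the "homunculus") must pick the right one. The gap Dembski points to is that the dispatcher only works for problem kinds it already knows about:

```python
# A toy "library of searches": each entry is a specialized solver for a
# well-defined problem class known in advance.  (Hypothetical example.)
library = {
    "sort": lambda data: sorted(data),
    "max":  lambda data: max(data),
}

def homunculus(problem_kind, data):
    """Select a pre-built search matching a problem specified in advance."""
    solver = library.get(problem_kind)
    if solver is None:
        # A genuinely novel problem: no pre-built search applies, and
        # nothing here can construct one - the open-endedness gap.
        raise KeyError(f"no search defined for {problem_kind!r}")
    return solver(data)
```

The dispatcher can route between "sort" and "max" but simply fails on anything outside its predefined repertoire, whereas human cognition would set about constructing a new search.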

The general drift of Dembski's argument is probably correct. The highly generic, catch-all nature of human-level intelligence is found not in an ability to rapidly traverse a well-defined search space, but in the ability to make inroads into problems not yet conceptualized, and this sets it apart from current AI. If I'm reading Dembski correctly then he is saying that little progress has been made on handling the generic problem. This looks to be true to me. And yet Dembski is characteristically and rightly tentative about his conclusion; for on the question of being able to program a general problem solver he says this (my emphasis):

Good luck with that! I’m not saying it’s impossible. I am saying that there’s no evidence of any progress to date. Until then, there’s no reason to hold our breath or feel threatened by AI. 

So, Dembski isn't saying that it is impossible to solve the problem of the generic problem solver, the solver that can tackle cold problems; rather he is telling us that progress on this question isn't very noticeable. That question can in fact be expressed succinctly and equivalently as follows: can the human mind understand its own processes of understanding? If so, then AI aficionados are in with a chance. But, as I once heard one AI expert say, our current attempts to create human-level AI may be like a monkey climbing a tree and thinking it has made progress in getting to the moon; the sheer complexity of human cognition may yet confound the attempt. But then if the human mind is that complex, perhaps it is complex enough to wrap itself around its own complexity!

The fact is we do not know whether the human mind is equipped to fully understand itself or not. It follows, then, that we don't know whether human-level AI is attainable by human intellectual endeavor. Dembski himself doesn't claim to be able to answer this question absolutely either way; he's just saying that a heck of a lot more work needs to be done if indeed this problem is humanly solvable. In that he's probably right.

But if the barrier to the creation of human-level AI is sheer quantitative complexity (as opposed to some hidden in-principle impossibility) requiring centuries of work, we've got to make a start at some point, and we may as well start now...


To this end, my own attempts to address the AI question can be found in my Thinknet Project. This project revolves around the gamble that associationism is the fundamental language with the potential to cover and render the whole of reality. This working assumption is based in part on my experience with my Disorder and Randomness project and the item-property model I employed there, which I picked up and used in my connectionist Thinknet Project.

My tentative thesis is that all search problems, or at least a wide range of them, can be translated into my item-property interpretation of connectionism and the searching of an association network. But having said that, it is clear that using this basic idea to build human-level AI may still entail all but unfathomable construction complexities, just as the relatively simple and elemental periodic table is only the first rung on the ladder of the incredible chemical complexity of life. Something of this potential complexity in cognition is glimpsed in part 4 of the Thinknet Project.
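To make the item-property idea concrete, here is a minimal sketch under my own hypothetical names and data structures (the actual Thinknet model is considerably richer than this): items carry property sets, two items are associated in proportion to the properties they share, and a query is a set of cue properties against which every item is scored.

```python
# A minimal sketch of an item-property association network.
# Names and structures are illustrative, not the actual Thinknet code.
items = {
    "sparrow": {"bird", "flies", "small"},
    "penguin": {"bird", "swims"},
    "bat":     {"mammal", "flies", "small"},
}

def associate(a, b):
    """Association strength: the number of properties two items share."""
    return len(items[a] & items[b])

def retrieve(cues):
    """Rank items by how many cue properties each carries (a crude search)."""
    scores = {name: len(props & cues) for name, props in items.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranked = retrieve({"flies", "small"})  # "sparrow" and "bat" outrank "penguin"
```

Even this toy version hints at the construction problem: before any search can run, reality has to be compiled into the item-property vocabulary, and it is that compilation step which carries the unfathomable complexity.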

And there is of course the big question of human consciousness. I hold the view that even the most sophisticated silicon-chip simulation of human neural-ware would not be conscious, any more than a near-perfect flight simulator can be counted as a flying aircraft. This is a point that philosopher John Searle makes with his Chinese room and beer-can arguments. And yet I'm not persuaded by dualist ghost-in-the-machine ideas. I am much more inclined to believe that something about the way neural chemistry uses the fundamentals of matter has the effect of introducing conscious qualia into its otherwise formal cognitive structures. In comparison, silicon-chip-based computers, even though they may have the potential to simulate correctly much of the formal structure of human cognition, do not exploit the material qualities (as opposed to the formal structures) needed to bring about conscious cognition. I don't think I'm being very original in suspecting that those qualities have got something to do with the wave-particle duality of matter - something the current "beer-can" paradigm of computing does not exploit.

So, if I am right, then to create conscious human-level AI one really needs essentially to copy the God-given cognitive technology we already have to hand and which is open to close scrutiny - namely, human (and animal) cognition. But if we did this and created human-level AI, it would really no longer classify as AI; it wouldn't be just a silicon-chip copy of the formal structure of human cognition but also a copy of the material qualities of human cognition, and would therefore classify as RI - that is, Real Intelligence. Dembski himself may also be thinking along these lines when he says this:

Essentially, to resolve AI’s homunculus problem, strong AI supporters would need to come up with a radically new approach to programming (perhaps building machines in analogy with humans in some form of machine embryological development).

So, if and when we succeed in building human-level AI we would have come full circle - we would effectively have manufactured a biological intelligence and reinvented the wheel. Yes, mimicking the right formal structures of Real Intelligence is a necessary condition for RI, but not a sufficient condition. For RI also requires those formal structures to be made of the right material qualities, qualities supplied by no less than the Almighty Himself. Matter, so-called "natural" matter, is in fact a miracle - in a sense it is "supernatural" - and therefore it is no surprise that it has some remarkable properties, such as the ability to support conscious cognition.

OK, so let's assume that, in the long run at least, we learn how to create beings of conscious cognition. Is it possible that they may have an intelligence that far exceeds our own? Perhaps, but then there may be limits to the generic problem-solving cognition which makes use of the item-property connectionist model (the IPC model). After all, for the IPC model to work as a problem solver it must first compile reality into the general cognitive language of the association network, and this can only be done via a lengthy process of experience and learning. Like ourselves, an IPC "machine" would have to go through this learning process. Now, a learning process is only as good as the epistemology our world supports and only as good as the data that world feeds into learning. For example, a completely isolated super-intelligent IPC machine would learn very little and remain a super-intelligent idiot. Thus our budding super-intelligence can only learn as fast as the external world supplies data, and that may not be very fast. Moreover, the data may contain biases, misrepresentations and errors. To deal with this, the super-intelligent IPC machine may have built into it some seat-of-the-pants, hit-and-miss heuristics which in the wrong circumstances come out as spectacularly inappropriate biases. This all sounds very human to me. And while we are on the subject of the human, it may be that in our IPC machine we would have to duplicate other advantageous human traits, such as social motivations and community heuristics, which would allow an IPC machine to benefit from community experience. But that comes with trade-offs as well, in particular the limits inherent in the data supplied by the community: outsourcing cognitive processing to other members of a community comes with hazards. Perhaps our gregarious super-intelligent machine may become a young-earther, a Jehovah's Witness or a flat-earther!

The take-home lesson here is that the epistemics of our world is such that a complex adaptive system like an IPC machine will necessarily have to engage in the compromises and trade-offs inherent in seat-of-the-pants heuristics.

Another thing to bear in mind is that intelligent performance is a multidimensional quantity: there are different kinds of intelligence. Some individuals can excel at task X and yet be poor at task Y. This may be because it is not logically possible to be good at both X and Y; these tasks may have a logical structure which entails a conflict.

In summary: it may well be that our world puts limits on how intelligent an IPC machine can be. In any case an IPC machine is likely to be slow, because it takes time to render the world in terms of its highly generic associative language. And its store of knowledge will only develop as fast as learning allows; this learning may be hamstrung by the rate at which our world supplies data.


If I may indulge in a little speculative futurology, my guess is that whilst successful human-engineered RI is possible (and will likely have the touch and feel of human thinking), there will be no AI takeover. Rather, what will happen is that humans (and perhaps human-engineered IPCMs) will become more and more integrated with an extensive toolbox of specialized conventional AI applications.

Human intelligence (and perhaps even engineered IPCMs) will become the homunculus that Dembski talks of; in short, AI will serve as an increasingly integrated extension of the human mind. In many ways this is already happening. The average human who has access to a web-connected desktop computer or an iPhone has at his disposal potentially huge information and AI resources. And maybe one day web connectivity will be connected directly to the brain surgically! Web apps will do slave tasks that IPC cognition finds difficult because of the processing overheads inherent in the need to translate into the general IPC language. But IPC will remain the core intelligence, effectively the managing homunculus spider in the middle of the web. If that "spider" happens to be a human-engineered IPCM, then I feel it will necessarily display very human personal foibles and limitations, as one would expect from what would essentially be a complex adaptive system placed in a world whose epistemic interface does not always provide easy interpretation.

As far as processing and information resources are concerned, the person connected to the internet has huge advantages over the disconnected mind; so much so that the connected person is effectively a genius in comparison! For example, I don't use an iPhone, which means that when I'm out and about, the bright teenager who walks past me in the street and who is a smart operator of his iPhone is, in comparison to myself, a super-intelligent, AI-extended being! When he isn't viewing porn or playing computer games he can far exceed anything I can do intellectually! In one sense those AI super-intelligences have already arrived!
