
Saturday, August 01, 2015

The Thinknet Project. Footnote: Incomputability and the Biological Mind

Sir Roger Penrose on the nature of conscious cognition

This post is really a footnote to my Thinknet series, in which I am exploring a computerised simulation of connectionism in the hope that it might throw light on the subject of intelligence/mind. Having embarked on a series that investigates the nature of thinking on the foundational assumption that intelligence is a process which can be simulated using algorithms, I ended part 1 with the following question:

There was another question that was waiting for a rainy day: Where does Penrose’s work on mental incomputability fit into all this, if at all?

In his books The Emperor’s New Mind and Shadows of the Mind Roger Penrose suggests that incomputability is a necessary condition (though presumably not a sufficient condition) for conscious cognition. That is, according to Penrose, what sets human intelligence apart from algorithmic intelligence is that the former somehow exploits incomputable processes. But not everyone is happy with Penrose’s proposal. For example, Wikipedia quotes Marvin Minsky:

Marvin Minsky, a leading proponent of artificial intelligence, was particularly critical, stating that Penrose "tries to show, in chapter after chapter, that human thought cannot be based on any known scientific principle." Minsky's position is exactly the opposite – he believes that humans are, in fact, machines, whose functioning, although complex, is fully explainable by current physics. Minsky maintains that "one can carry that quest [for scientific explanation] too far by only seeking new basic principles instead of attacking the real detail. This is what I see in Penrose's quest for a new basic principle of physics that will account for consciousness."

I wouldn’t say that I’m too chuffed myself about Penrose’s proposal; if it is true then it puts the blocks on all attempts to find algorithmic simulations addressing the problem of intelligence that use standard connectionist models of the mind, such as we see in De Bono’s The Mechanism of Mind. Penrose is also challenging positions like that of John Searle, who believes that whilst conscious cognition is algorithmic, algorithmics alone is not a sufficient condition for it: according to Searle a sufficient condition for conscious cognition is that its algorithms are realised in biological qualities. Therefore constructing a formally correct structure out of, say, “beer cans” (as Searle puts it) would only amount to a simulation and not the-thing-in-itself. So, although Searle believes that conscious cognition requires a specialist biological ontology, he nevertheless believes that that ontology has an algorithmic formal structure. Even if one should propose that the human mind is some kind of quantum computer (in itself a radical and very controversial proposal), human thinking would still classify as a classically algorithmic process, and that simply doesn’t go far enough for Penrose! Penrose is nevertheless making a serious proposal that needs evaluating.

So what then is incomputability? In his book Algorithmic Information Theory, Gregory Chaitin (following Turing) has a back-of-the-envelope proof of the existence of incomputable numbers; see the picture below:

Chaitin's proof of incomputable numbers

As Chaitin notes above, Turing’s halting theorem and Gödel’s incompleteness theorem follow as corollaries of his Cantor diagonal slash proof of the existence of incomputable numbers (see the picture above).
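
For readers without the picture to hand, here is a compressed, textbook-style LaTeX sketch of the diagonal argument being referred to; it is a reconstruction of the standard Turing/Cantor reasoning rather than Chaitin’s exact presentation:

% A compressed, textbook-style sketch of the diagonal argument; not
% necessarily Chaitin's exact presentation in the picture above.
\documentclass{article}
\begin{document}
\begin{enumerate}
  \item Programs are finite strings of symbols, so they can be listed
        $p_1, p_2, p_3, \ldots$; there are only countably many of them.
  \item Define a real number $d \in (0,1)$ digit by digit: if $p_k$ halts and
        prints a real number, choose the $k$-th digit of $d$ to differ from
        the $k$-th digit of that output (avoiding the digits $0$ and $9$ so
        that double decimal representations cause no trouble); otherwise set
        the $k$-th digit of $d$ to $1$.
  \item By construction $d$ differs from the output of every program, so $d$
        is an incomputable number.
  \item If we could decide in general whether $p_k$ halts, the recipe above
        would let us compute $d$, which is a contradiction; hence Turing's
        halting theorem drops out as a corollary, and G\"odel's incompleteness
        theorem follows in turn, since a sound and complete formal system for
        arithmetic would give us a way of settling halting questions by
        searching through proofs.
\end{enumerate}
\end{document}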

In his proof Chaitin allows his computations to generate numbers of indefinite length. For example, he permits an obviously computable periodic number such as 0.123123123123…etc which disappears off into the infinite distance. Chaitin also doesn’t put a limit on the number of possible computations (or algorithms) that might exist. In practice, however, we know that physical limits constrain both the size of the numbers an algorithm can generate and the number of possible algorithms. So let us assume that practical constraints imply that the number of algorithms and the number of digits generated have roughly the same value, and let’s call that value n. This value n gives us an n x n square of digits. The finite computable numbers that form the rows of this square can be thought of as configurations of digits. Clearly, then, the n x n square contains n configurations of digits of length n. But Cantor’s diagonal slash method shows that it is very easy to construct a configuration that doesn’t appear in this rather limited set of practically computable numbers, and the reason for this is also fairly obvious: the square contains a mere n numbers, but given a configuration of length n it is actually possible to construct a much larger number of configurations, a number quantified by an exponential of the form:

~ A exp[kn]      (1)

…where (for decimal digits the count is exactly 10^n, i.e. A = 1 and k = ln 10) it is clear that:

A exp[kn] >> n      (2)

It follows, then, that there are far more conceivably constructible configurations of digits than there are practically computable configurations. Of course, we can continue to extend the value of n, but according to (2) the set of practically incomputable configurations increases in size much faster than n does, and this continues into the infinite realm where we find the absolutely incomputable configurations.
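
Since the finite n x n version of the diagonal slash is concrete enough to run, here is a minimal Python sketch of it. The rows of the square are random digit strings standing in for the outputs of n hypothetical practically computable algorithms (purely an illustrative assumption); the diagonal construction always produces a configuration missing from the square, and the final count shows how badly n is outnumbered by the 10^n possible configurations of length n:

import random

def diagonal_slash(square):
    """Given an n x n square of decimal digits (a list of n digit strings,
    each of length n), return a digit string of length n that differs from
    every row by altering each digit on the diagonal."""
    n = len(square)
    return "".join(str((int(square[i][i]) + 1) % 10) for i in range(n))

# n rows of n random digits, standing in for the outputs of n hypothetical
# practically computable algorithms.
n = 8
square = ["".join(random.choice("0123456789") for _ in range(n)) for _ in range(n)]

d = diagonal_slash(square)
assert d not in square   # the diagonal configuration always escapes the n rows

# The counting side of the argument: the square holds only n configurations,
# but there are 10**n possible digit configurations of length n.
print("configurations in the square:", n)
print("possible configurations of length", n, ":", 10**n)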

The foregoing tells us that as n increases there is a spectrum of incomputability, running from progressively impractical levels of computability right through to absolute incomputability at infinite n. Given this spectrum it seems to me that Penrose’s proposal identifying absolute incomputability as a necessary condition of conscious cognition is a little arbitrary; for perhaps the condition for conscious cognition emerges at some finite point along the road to absolute incomputability? However, if we give Penrose the benefit of the doubt, then according to Penrose incomputability of process is one of the conditions that sets apart biological intelligence (which presumably applies to a wide range of animals such as primates, cats, dogs, dolphins etc) from classical algorithmic intelligence. In particular, in his book Shadows of the Mind Penrose constructs what he believes to be a specific example demonstrating the human ability to think beyond computability; an analysis of his reasoning here is on my to-do list. But for the moment suffice it to say that the example he gives supporting his notion that humans have knowledge of the incomputable does not entirely convince me: incomputable knowledge pertains to information about those complex and infinite incomputable patterns, but at no point in Penrose’s example did I feel I was party to anything other than knowledge of fairly regular patterns. Still, I really need to look at his reasoning more closely.

There is, however, one relatively prosaic way in which the human mind could tap into an incomputable process. In Edward De Bono’s book The Mechanism of Mind it is clear that in his models the action of thinking modifies the brain’s memories and, conversely, those memories affect the way thinking develops. So thinking affects memories and memories affect thinking – in short we have a feedback system here; that is, nonlinearity is part of the brain’s natural processing. This raises the possibility of complex chaotic behaviour, which in turn has the potential to amplify the randomness of the quantum world. Randomness, if understood in its absolute sense, is incomputable, and it is possible (although never absolutely provable) that random quantum leaping delivers incomputable configurations to our world. If the mind is chaotically unstable enough to tap into these random configurations its behaviour becomes incomputable. In fact it is quite likely that this kind of common-or-garden incomputability is present in the brain anyway – it doesn’t require the mind to tap into some new and exotic incomputable physics, or even for the mind to be a quantum computer. This kind of incomputability, however, is clearly not a sufficient condition for conscious intelligence: after all, Saturn’s moon Hyperion is tumbling chaotically and may well be sensitive to random quantum fluctuations, but that doesn’t make it conscious!
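
As a toy sketch of this amplification mechanism (and only that: the logistic map used below is an illustrative stand-in, not anything taken from De Bono’s models or from neuroscience), the following Python snippet shows a chaotic feedback iteration blowing up a perturbation of roughly quantum/machine-precision size until it dominates the state:

# Toy illustration: a chaotic feedback loop amplifies a tiny perturbation
# (standing in for quantum-scale randomness) until it dominates the state.
# The logistic map is an illustrative stand-in, not a model of the brain.

def logistic(x, r=4.0):
    """One step of the logistic map, which is chaotic at r = 4."""
    return r * x * (1.0 - x)

x = 0.4            # unperturbed trajectory
y = 0.4 + 1e-15    # the same state nudged at roughly machine precision

for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")

# After a few dozen iterations the two trajectories are completely
# decorrelated: the microscopic nudge has grown to order one.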

Although I’m not particularly comfortable with Penrose’s proposal it is, nonetheless, worth entertaining as a possibility. The fact is we still don’t really know just what the sufficient physical conditions are (i.e. the conditions as observed by the third person) which entail the presence of that enigmatic conscious first person perspective of biological brains. If, on the other hand, Penrose is wrong and incomputability is not a necessary condition for the first person perspective, then, as Penrose points out in his book Shadows of the Mind, it becomes possible in principle to at least simulate human intelligence using a sufficiently powerful algorithmic computer. For those who reject the first person as a valid perspective, a simulation of this kind, which would presumably be thorough enough to pass the Turing test, classifies as fully sentient by definition. It is true that nowadays computerised simulations of all sorts are looking increasingly realistic, and one can imagine them improving to such an extent that they could fool a lot of the people a lot of the time. But there remains a deep intuition that appearances, no matter how good, are never logically equivalent to a genuine first person perspective, a perspective that is otherwise inaccessible from the third person perspective.

AI expert Marvin Minsky would prefer to think that the quest for an understanding of human sentience is purely a question of understanding its formal complexity and that the ontology which reifies that formal complexity is not relevant. Although I am unsure about Penrose’s proposal that incomputability is a necessary condition of sentience I nevertheless agree with his feeling that the existence of the kind of conscious cognition which is at the heart of practical levels of intelligence points to something about our world which we don’t yet fully understand. That I’m at one with Penrose on this question is indicated by the following quote taken from a post where I criticise IDist Vincent Torley:

Stupidly, to my mind, Torley even claims that an intelligent agent might be insentient: To date this seems highly implausible. A designing intelligence would have to be highly motivated; our current understanding is that all highly motivated goal seeking complex systems are the seat of a motivating sentience: For example, it is a form of irrational solipsism to suggest that the higher mammals are anything other than conscious/sentient. Notably, Roger Penrose’s book "Shadows of the Mind" is based on the premise that real intelligence and consciousness go together.

Following John Searle I’m inclined toward the conclusion that intelligence is not just its formal structure: viz, a correctly arranged formal structure made of beer cans wouldn’t be a practical manifestation of intelligence – in fact no simulation would be practical, any more than we would expect an aircraft simulator to actually fly! It follows then that we never could have the insentient intelligence that Torley speaks of, because any real practical level of intelligence would need to exploit whatever processes deliver sentience and conscious cognition. It is for this reason that I believe we should respect high levels of biological intelligence in all its forms: dogs, cats, dolphins, primates, elephants and, in my opinion, also human embryos.

Be all that as it may, there is one thing I am fairly sure about: in the connectionism sketched out by Edward De Bono in his Mechanism of Mind we are starting to see an uncovering of some of the important principles required for intelligence, although I guess there is a whole lot more to discover: as physicists are all too aware, the disconnect between gravity and quantum mechanics is a sure sign that our knowledge of the world about us is still very partial.
