
Monday, August 17, 2015

The Thinknet Project: Footnote 2: Incomputability and the Biological Mind

[Scanned pages from Roger Penrose’s “Shadows of the Mind”]

The above is a scan taken from Roger Penrose’s book “Shadows of the Mind”. In this book he claims to have found a mathematical argument, based on the halting theorem, which proves that human beings are capable of grasping otherwise incomputable truths. The pages above contain the core of his argument.

Penrose starts by defining C_q(n) – this notation represents the qth computation acting on a single number n. Penrose then considers an algorithm A(q,n) which takes two parameters q and n, where q designates the qth computation and n is the input parameter for this computation. The algorithm A(q,n) is stipulated to be soundly devised in such a way that it halts if C_q(n) does not stop. However, as Penrose goes on to show, this sweeping stipulation entails a contradiction that not only demonstrates the limits of algorithmics (as per the well-established Turing halting theorem) but also, Penrose claims, the ability of the human mind to transcend algorithmic limitations (a conclusion that is highly controversial).
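To fix ideas, here is a minimal Python sketch of this setup. It is purely illustrative: the enumeration is a toy two-element list rather than a genuine Gödel-numbering of all computations, and the names c0, c1, COMPUTATIONS and the hard-coded A are my own inventions, not Penrose’s.

```python
# Toy stand-in for Penrose's notation: C(q, n) is the qth computation
# acting on the single number n.  A real argument enumerates *all*
# computations (e.g. by Godel-numbering Turing machines).

def c0(n):            # C_0: halts on every input
    return n + 1

def c1(n):            # C_1: halts on no input
    while True:
        pass

COMPUTATIONS = [c0, c1]

def C(q, n):
    """Run the qth computation on input n."""
    return COMPUTATIONS[q](n)

# Penrose's hypothetical A(q, n): it signals that C_q(n) does not stop
# by itself halting.  For this two-element toy we can hard-code the
# verdicts; the argument in the text shows no such A works in general.
def A(q, n):
    if q == 1:        # we happen to know c1 never halts
        return        # A halts: flag raised, "C_1(n) does not stop"
    while True:       # no verdict reached: A itself never halts
        pass

A(1, 7)               # returns immediately, correctly flagging C_1(7)
```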

In order to show this Penrose considers A(n,n): If A works correctly (which we assume it does) then A(n,n) will halt if the nth computation, when seeded with number n, does not halt. Penrose then argues that A(n,n) is a computation that depends on just one number, and therefore it must be one of the C_q. So let’s suppose that A(n,n) is the computation C_k. That is:

A(n,n) = C_k(n).

Now n is a variable so let us put n = k.   Hence:

A(k,k) = C_k(k).

This latter expression clearly entails a contradiction: for if we insist that A(k,k) will halt if C_k(k) does not halt, then because C_k(k) is actually the same computation as A(k,k), A(k,k) must halt if A(k,k) doesn’t halt! One way of resolving this contradiction is to relax the original stipulation that A(q,n) is an algorithm which successfully determines in all cases that C_q(n) will not halt; we conclude instead that C_k(k) is one of those computations on which A is unable to deliver a result: in other words A(k,k) does not halt. The obvious corollary is that because A(k,k) is actually the same as C_k(k), C_k(k) does not halt either.
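The diagonal move just described can be written out as a short Python sketch. Everything here (make_diagonal, the stand-in decider) is my own hypothetical naming; the point is only that any candidate decider halts(prog, arg) is refuted the moment the diagonal program is fed to itself:

```python
# Diagonal construction behind Turing's halting theorem.  Suppose, for
# contradiction, someone supplies a total decider halts(prog, arg)
# returning True exactly when prog(arg) halts.

def make_diagonal(halts):
    """Build the diagonal computation: A(n, n) with n set to k."""
    def diag(prog):
        if halts(prog, prog):   # decider claims prog(prog) halts...
            while True:         # ...so do the opposite and loop forever
                pass
        return "halted"         # decider claims prog(prog) loops: halt
    return diag

# Any would-be decider is wrong about the diagonal applied to itself.
# E.g. a decider that always answers "loops":
diag = make_diagonal(lambda prog, arg: False)
print(diag(diag))   # prints "halted", contradicting the decider's verdict
```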

The foregoing is a version of Turing’s famous halting theorem; namely, that there exists no general algorithm that can solve the halting problem. However, just over the page from the above scan (i.e. page 76) Penrose takes things a little further. He concludes that because we, as humans, can see that C_k(k) doesn’t stop whereas A(k,k) is unable to ascertain this, then:

… since from the knowledge of A and of its soundness, we can actually construct the computation C_k(k) that we can see does not ever stop, we deduce that A cannot be a formalisation of the procedures available to mathematicians for ascertaining that computations do not stop, no matter what A is. Hence: Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth.

Penrose goes on to propose that the human mind isn't using a computable function at all.

Firstly, before considering whether Penrose is right or wrong, let me ask this question: Is this good news or bad news? I think it depends on your interests. If you have a belief in the mystical superiority of the human mind over computers it will suit you down to the ground. If you don’t have a stake in algorithmic AI then you’re probably not too bothered. But if you do have a large stake in algorithmic AI it’s probably all a bit upsetting, because it means there is little point in attempting to understand human levels of intelligence in algorithmic terms, even if those terms involve new (computable) physics: for according to Penrose the biological mind reaches into the algorithmically unreachable realm of incomputability. In passing let me just say that for myself I’ve always been rather drawn to a position similar to that of John Searle: that is, the human mind (and probably other minds with a similar biological ontology, like those of cats and dogs etc.) possesses the irreducible first person perspective we call consciousness; but this view in and of itself doesn’t necessarily deny that a passable algorithmic simulation of the mind (i.e., as seen by the third person) could be created if we had sufficient understanding of the algorithmic formal structure of the brain. I have to confess that through my Thinknet Project I do have a bit of a stake in algorithmic AI.

But vested interests apart is Penrose right or wrong in his conclusion?

One way of thinking about the contradiction that Penrose uses to derive his version of the halting theorem is to imagine that algorithm A is a system reified on some kind of mechanical ontology, like a computer. The system puts up a flag when it finds an algorithm with a particular property P; in this case P = doesn’t halt. But there is an obvious problem when A tries to do this for itself: in the very act of trying to flag property P, algorithm A violates property P! In effect, when A attempts to flag P it is just like one of those sentences that tries to talk about itself in a contradictory way, e.g. “This sentence is false”. This kind of conceptual feedback loop opens up the liability of contradiction; that is, in talking about itself A invalidates the very self-properties it is trying to describe. The way round this internal contradiction is to assume that A cannot have certain items of knowledge about itself; therefore we conclude that A is unable to flag the kind of self-properties that lead to contradiction. This is an example of the general Gödelian conclusion that for a system like A there exists a class of conclusions about itself that it cannot formally ascertain without contradiction.
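As a code-level analogue of this self-flagging problem (again a hypothetical sketch of my own): a program whose one job is to raise the flag “I don’t halt” about itself defeats any oracle you hand it, because it always does the opposite of whatever the oracle asserts.

```python
# self_flagger tries to flag property P (= "doesn't halt") about itself.
# `oracle` stands for any would-be procedure answering "does it halt?".

def self_flagger(oracle):
    if oracle(self_flagger):       # oracle asserts "you halt"...
        while True:                # ...so loop forever, refuting it
            pass
    return "flag: I don't halt"    # oracle asserts "you don't halt",
                                   # but returning means it just halted

# An oracle claiming self_flagger loops is refuted on the spot:
print(self_flagger(lambda prog: False))   # halts, printing the flag
# (An oracle claiming it halts would be refuted too, by looping forever.)
```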

But seemingly, as Penrose has shown, this result doesn’t prevent us knowing things about A that A can’t know about itself; like, for example, whether or not it stops on certain inputs. Does this mean, then, that human thinking is non-algorithmic, as Penrose concludes? I suggest not; and I say this because before we start proposing that humans have access to incomputable mathematical truths there is potentially a less esoteric solution to Penrose’s conundrum, as I shall now try to explain:

Penrose’s conclusion doesn’t necessarily follow because, I submit, the result simply means that the human algorithmic system, let’s call it H, is reified on an ontology beyond and outside of A. That H can flag properties in A without getting itself into the kind of contradictory loop which arises when A starts talking about itself is, I propose, a result of H running on a different ontology, and is not necessarily down to some incomputable power of H. The flags that H raises about the properties of A are not reified in the medium of A’s ontology, and therefore conflicts cannot arise whereby A, in the very act of flagging knowledge about itself, has the effect of negating that knowledge. Because H is ontologically other than A, flags raised in H can in no way affect the properties of A.
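A crude illustration of what such an “outside” H might look like, as a sketch under my own assumptions: H inspects the text of a program rather than running it, so its flags live in a different medium from the program’s own execution. It recognises only one trivially decidable non-halting pattern and in no way evades Turing’s theorem in general. (Run it as a script so inspect can see the source.)

```python
import ast
import inspect

def h_never_halts(fn) -> bool:
    """Flag fn as non-halting if it contains a bare `while True` loop
    with no break or return inside: one tiny, decidable pattern."""
    tree = ast.parse(inspect.getsource(fn))   # examine the text, not the run
    for node in ast.walk(tree):
        if (isinstance(node, ast.While)
                and isinstance(node.test, ast.Constant)
                and node.test.value is True
                and not any(isinstance(sub, (ast.Break, ast.Return))
                            for sub in ast.walk(node))):
            return True
    return False

def spinner():
    while True:
        pass

print(h_never_halts(spinner))   # True: flagged from outside, never executed
```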

In fact, in discussing his conclusion Penrose actually shows some awareness of the outsidedness of H and considers the case where we human outsiders can think of ways of creating an algorithm that is able to determine that C_k(k) doesn’t stop:

A computer could be programmed to follow through precisely the argument that I have given here. Could it not itself, therefore, arrive at any conclusion that I have myself reached? It is certainly true that it is a computational process to find the particular calculation C_k(k), given algorithm A. In fact this can be exhibited quite explicitly… Although the procedure for obtaining C_k(k) from A can be put into the form of a computation, this computation is not part of the procedures contained in A. It cannot be, because A is not capable of ascertaining the truth of C_k(k)…
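Penrose’s remark that obtaining C_k(k) from A “can be put into the form of a computation” can be made concrete with a standard quining trick. The sketch below is my own construction (using program text in place of Gödel numbers, and assuming a_source contains no stray braces): given the source of A it mechanically emits the source of a program that runs A on a quotation of its own text.

```python
def build_diagonal(a_source: str) -> str:
    """Given source text defining A(q, n), emit source for a program
    that applies A to (a quotation of) its own complete text."""
    template = (
        a_source
        + "\nTEMPLATE = {t!r}"
        + "\nSELF = TEMPLATE.format(t=TEMPLATE)"
        + "\nA(SELF, SELF)\n"
    )
    # Substituting the template into itself makes SELF, at runtime,
    # equal the emitted program's own source: the quining step.
    return template.format(t=template)

print(build_diagonal("def A(q, n):\n    pass"))
```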

This is an admission that there are algorithmic ontologies beyond A that can talk about A without facing the contradictions that circumscribe A when it tries to talk about itself. So before resorting to exotic ideas about the incomputable it may be that the more prosaic reason of a distinction of ontology explains why humans apparently know more than A; in fact this is what I am going to propose. This proposal, of course, doesn’t do away with the ultimate applicability of Gödel’s and Turing’s conclusions, because we simply find that these conclusions bite us humans too when we start to think about ourselves. For although it seems possible to create an algorithm that could embody our own understanding about A, Penrose goes on to use what I refer to as the “superset trick” to show that contradictions ultimately must arise when any self-knowledge is sought, human or otherwise. To this end Penrose envisages a robot that has been given information about the procedure for obtaining C_k(k) (my underlining):

Of course we could tell our robot that C_k(k) indeed does not stop, but if the robot were to accept this fact, it would have to modify its own rules by adjoining this truth to the ones it already ‘knows’. We could imagine going further than this and telling our robot, in some appropriate way, that the general computational procedure for obtaining C_k(k) from A is also something it should ‘know’ as a way of obtaining new truths from old. Anything that is well defined and computational could be added to the robot’s store of ‘knowledge’. But we now have a new ‘A’, and the Gödel argument would apply to this, instead of the old ‘A’. That is to say, we should have been using this new ‘A’ all along instead of the old ‘A’, since it is cheating to change our ‘A’ in the middle of the argument… It is cheating to introduce another truth judging computational procedure not contained in A after we have settled on A as representing this totality.

What I think Penrose is saying here is that any attempt to change A in order to circumvent the limits on the way A can talk about itself simply creates a new A which, when applied to itself, is either liable to the same old contradictions or must forever be forbidden certain kinds of self-knowledge. The “superset trick” that I referred to entails subsuming all such possible changes into a “super A”, and Penrose rightly tells us that ultimately Turing’s halting theorem will bite this superset A.
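The superset trick can likewise be sketched in a few lines (hypothetical names of my own, building on the earlier make_diagonal sketch): adjoining the new item of ‘knowledge’ to A merely yields another candidate decider, and the same diagonal construction applies to it verbatim.

```python
# Adjoin newly 'known' non-halting cases to a decider A, giving A'.
def extend(A, known_nonhalting):
    def A_prime(prog, arg):
        if (prog, arg) in known_nonhalting:   # the adjoined truths
            return False                      # verdict: "does not halt"
        return A(prog, arg)                   # otherwise defer to old A
    return A_prime

# A_prime is just another would-be decider, so make_diagonal(A_prime)
# produces a fresh diagonal program that A_prime cannot settle either.
```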

But cheating is exactly what we can do if we are something other than the algorithmic system that is A, and it is this ontological otherness which, I submit, gives the apparent, albeit spurious, impression that our minds somehow transcend Gödelian and Turing restrictions. We are ontologically distinct from Penrose’s robot and therefore we appear to be able to violate Gödel and Turing; but this is true only when we are talking about an object that is other than ourselves. This distinction of ontology won’t rescue us when we start talking about ourselves; for ultimately Turing’s and Gödel’s superset-based conclusions will also bite when it comes to human self-knowledge: ergo, when we talk about our own ontology there are certain things we cannot know without raising a contradiction. If these contradictions are not to arise with human self-knowing, Turing and Gödel incomputability must also apply to human ontology. In summary, then, the scenario considered by Penrose is not proof that human mental life has available to it incomputable knowledge; a better explanation, in my view, is that Gödel and Turing restrictions only bite when an ontology attempts to know itself, not when one ontology describes another.

***

However, having said all that, I must repeat and extend what I said in the first part of this footnote: human mental life is likely to be a non-linear process, thereby giving it a chaotic potential which could make it sensitive to what may be the incomputable patterns of quantum fluctuations. As I said in the first part, this non-linearity arises because thinking updates the memories on the thinking surface which in turn affects thinking, thereby effectively giving us a feedback loop with non-linear potential. But in his book “The Mechanism of Mind” Edward de Bono also identifies another way in which the mind may be non-linear in operation: he considers a model where one thinking surface has its activity projected as input onto another thinking surface, which in turn has its thinking activity fed back to the first surface. This scenario resembles the case where a video camera sends its output to a screen, a screen which is being videoed by the self-same camera. The feedback in both cases is likely to result in a chaotic pattern of evolution, an evolution sensitive to very minor fluctuations. This fairly prosaic way of postulating an incomputable cognitive process doesn’t even ask for new physics; although it does assume that quantum leaping is a literal process and a process that has incomputability at its heart.
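How sensitive such a feedback loop could be is easy to illustrate with a stock example (my own choice, not de Bono’s model): the logistic map at r = 4 is chaotic, so two trajectories differing by one part in a trillion soon disagree completely; this is the kind of sensitivity that could amplify quantum-scale fluctuations.

```python
# Two runs of a chaotic feedback map from near-identical starting states.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12          # y: the same state plus a tiny fluctuation
for step in range(60):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.5:
        print(f"trajectories decorrelated by step {step}")
        break
```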

So my conclusion is that whilst I don’t think Penrose has successfully shown that the mind is incomputable in his sense, mind nevertheless is potentially a means of delivering incomputable patterns to our world as a result of its sensitive feedback loops.


Penrose on Consciousness

Relevant links:

Thinknet series so far:

Melencolia I series

The Joe Felsenstein series
