
Monday, August 17, 2015

The Thinknet Project: Footnote 2: Incomputability and the Biological Mind

(Scanned pages from Roger Penrose’s “Shadows of the Mind”)

The above is a scan taken from Roger Penrose’s book “Shadows of the Mind”. In this book he claims to have found a mathematical example, based on the halting theorem, proving that human beings are capable of grasping otherwise incomputable truths. The pages above contain the core of his argument.

Penrose starts by defining Cq(n) – this notation represents the qth computation acting on a single number n. Penrose then considers an algorithm A(q,n) which takes two parameters q and n, where, as I’ve already said, q designates the qth computation and n is the input parameter for this computation. The algorithm A(q,n) is stipulated to be soundly devised in such a way as to halt if Cq(n) does not stop. However, as Penrose goes on to show, this sweeping stipulation entails a contradiction that not only demonstrates the limits of algorithmics (as per the well-established Turing halting theorem) but also, Penrose argues, demonstrates the ability of the human mind to transcend algorithmic limitations (a conclusion that is highly controversial).

In order to show this Penrose considers A(n,n): If A works correctly (which we assume it does) then A(n,n) will halt if the nth computation, when seeded with number n, does not halt. Penrose then argues that A(n,n) is a computation that depends on just one number, and therefore it must be one of the Cq. So let’s suppose that A(n,n) is the computation Ck. That is:

A(n,n) = Ck(n).

Now n is a variable so let us put n = k.   Hence:

A(k,k) = Ck(k).

This latter expression clearly entails a contradiction: For if we insist that A(k,k) will halt if Ck(k) does not halt, then because Ck(k) is actually the same as A(k,k), A(k,k) must halt if A(k,k) doesn’t halt! One way of resolving this contradiction is to relax the original stipulation that A(q,n) is an algorithm which successfully determines in all cases that Cq(n) will not halt; thus we conclude that Ck(k) is one of those computations on which A is unable to deliver a result: In other words A(k,k) will not halt when it is asked about Ck(k). The obvious corollary is that because A(k,k) is actually the same as Ck(k), this means that Ck(k) does not halt either.
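To make the structure of the argument concrete, here is a minimal Python sketch of the diagonal construction. It is purely illustrative: the detector A is hypothetical (no sound procedure can settle every case), and the enumeration position k is simply assumed.

```python
# Toy illustration of the diagonal construction (Python used as pseudocode).
# 'A' is a hypothetical sound procedure: whenever A(q, n) halts, the q-th
# computation C_q acting on n does not halt. No such procedure can settle
# every case; this sketch shows where the contradiction comes from.

def make_Ck(A):
    """Given any candidate A, build the one-parameter computation
    C_k(n) = A(n, n). Because C_k acts on a single number, it must itself
    occur somewhere (say at position k) in the enumeration C_0, C_1, ...
    of the computations that A passes judgement on."""
    def Ck(n):
        return A(n, n)        # running C_k(n) just IS running A(n, n)
    return Ck

# Now set n = k and chase the definitions:
#   A(k, k) halts  ==>  C_k(k) does not halt      (soundness of A)
#   but C_k(k) is literally the computation A(k, k), so
#   A(k, k) halts  ==>  A(k, k) does not halt     -- a contradiction.
# The only consistent outcome is that A(k, k) never halts, and hence
# C_k(k) never halts either -- the very fact that A cannot certify.
```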

The foregoing is a version of Turing’s famous halting theorem; namely, that there exists no general algorithm that can solve the halting problem. However, just over on the next page from the above scan (i.e. page 76) Penrose takes things a little further. He concludes that because we, as humans, can see that Ck(k) doesn’t stop, whereas A(k,k) is unable to ascertain this, then:

…. since from the knowledge of A and of its soundness, we can actually construct the computation Ck(k) that we can see does not ever stop, we deduce that A cannot be a formalisation of the procedures available to mathematicians for ascertaining that computations do not stop, no matter what A is. Hence: Human mathematicians are not using a knowably sound algorithm in order to ascertain mathematical truth

Penrose goes on to propose that the human mind isn't using a computable function at all.

Firstly, before considering whether Penrose is right or wrong let me ask this question: Is this good news or bad news? I think it depends on your interests. If you have a belief in the mystical superiority of the human mind over computers it will suit you down to the ground. If you don’t have a stake in algorithmic AI then you’re probably not too bothered. But if you do have a large stake in algorithmic AI it’s probably all a bit upsetting because it means there is little point in attempting to understand human levels of intelligence in algorithmic terms, even if those terms involve new (computable) physics: For according to Penrose the biological mind reaches into the algorithmically unreachable realm of incomputability. In passing let me just say that for myself I’ve always been rather drawn to a position similar to that of John Searle: That is, the human mind (and probably other minds with a similar biological ontology like cats and dogs etc.) possesses the irreducible first person perspective we call consciousness; but this view in and of itself doesn’t necessarily deny that a passable algorithmic simulation of the mind (i.e., as seen by the third person) could be created if we had sufficient understanding of the algorithmic formal structure of the brain. I have to confess that through my Thinknet Project I do have a bit of a stake in algorithmic AI.

But vested interests apart is Penrose right or wrong in his conclusion?

One way of thinking about this contradiction that Penrose uses to derive his version of the halting theorem is to imagine that algorithm A is a system reified on some kind of mechanical ontology, like a computer. The system puts up a flag when it finds an algorithm with a particular property P; in this case P = doesn’t halt. But there is an obvious problem when A tries to do this for itself: in the very act of trying to flag property P, algorithm A then violates property P! In effect when A attempts to flag P it's just like one of those sentences that tries to talk about itself in a contradictory way e.g. “This sentence is false”. This kind of conceptual feedback loop opens up the liability of contradiction; that is, in talking about itself A invalidates the very self-properties it is trying to describe. The way round this internal contradiction is to assume that A cannot have certain items of knowledge about itself; therefore we conclude that A is unable to flag the kind of self-properties that lead to contradiction. This is an example of the general Gödelian conclusion that for a system like A there exists a class of conclusions about itself that it cannot formally ascertain without contradiction.

But seemingly, as Penrose has shown, this result doesn’t prevent us knowing, for instance, things about A that A can’t know about itself; like, for example, whether or not it stops on certain inputs. Does this mean then that human thinking is non-algorithmic as Penrose concludes?  I suggest no; and I say this because before we start proposing that humans have access to incomputable mathematical truths there is potentially a less esoteric solution to Penrose’s conundrum as I shall now try to explain:

Penrose’s conclusion doesn’t necessarily follow because, I submit, it simply means that the human algorithmic system, let’s call it H, is reified on an ontology beyond and outside of A. That H can flag properties in A without getting itself into the kind of contradictory loop which is raised when A starts talking about itself is, I propose, a result of H running on a different ontology and is not necessarily down to some incomputable power of H. The flags that H raises about the properties of A are not reified on the medium of A’s ontology and therefore conflicts cannot arise whereby A, in the very act of flagging knowledge about itself, has the effect of negating that knowledge. Because H is ontologically other than A, flags raised in H can in no way affect the properties of A.

In fact in discussing his conclusion Penrose actually shows some awareness of the out-sidedness of H and considers the case where we human outsiders can think of ways of creating an algorithm that is able to determine that Ck(k) doesn’t stop.

A computer could be programmed to follow through precisely the argument that I have given here. Could it not itself, therefore, arrive at any conclusion that I have myself reached?  It is certainly true that it is a computational process to find the particular calculation Ck(k), given algorithm A. In fact this can be exhibited quite explicitly…… Although the procedure for obtaining Ck(k) from A can be put into the form of a computation, this computation is not part of the procedures contained in A. It cannot be, because A is not capable of ascertaining the truth of Ck(k)….

This is an admission that there are algorithmic ontologies beyond A that can talk about A without facing the contradictions that circumscribe A when it tries to talk about itself. So before resorting to exotic ideas about the incomputable it may be that the more prosaic reason of a distinction of ontology explains why humans apparently know more than A; in fact this is what I am going to propose. This proposal, of course, doesn’t do away with the ultimate applicability of Godel’s and Turing’s conclusions because we simply find that these conclusions bite us humans too when we start to think about ourselves. For although it seems possible to create an algorithm that could embody our own understanding about A, Penrose goes on to use what I refer to as the “superset trick” to show that contradictions ultimately must arise when any self-knowledge is sought, human or otherwise. To this end Penrose envisages a robot that has been given information about the procedure for obtaining Ck(k) (My underlining):

Of course we could tell our robot that Ck(k) indeed does not stop, but if the robot were to accept this fact, it would have to modify its own rules by adjoining this truth to the ones it already ‘knows’. We could imagine going further than this and telling our robot, in some appropriate way, that the general computational procedure  for obtaining Ck(k) from A is also something it should ‘know’ as a way of obtaining new truths from old. Anything that is well defined and computational could be added to the robot’s store of ‘knowledge’. But we now have a new ‘A’, and the Gödel argument would apply to this, instead of the old ‘A’.  That is to say, we should have been using this new ‘A’ all along instead of the old ‘A’, since it is cheating  to change  our ‘A’ in the middle of the argument…..It is cheating to introduce another truth judging computational procedure not contained in A after  we have settled  on A as representing  this totality.

What I think Penrose is trying to say here is that any attempt to change A in order to circumvent the limits on the way A can talk about itself simply creates a new A which when applied to itself is either liable to the same old contradictions or must forever be forbidden certain kinds of self-knowledge. The “superset trick” that I referred to entails subsuming all such possible changes into a “super A” and Penrose rightly tells us that ultimately Turing’s halting theorem will bite this superset A.

But cheating is exactly what we can do if we are something other than the algorithmic system that is A, and it is this ontological otherness which, I submit, is giving an apparent, albeit spurious, impression that our minds somehow transcend Gödelian and Turing restrictions. We are ontologically distinct from Penrose’s robot and therefore we appear to be able to violate Godel and Turing; but this is true only when we are talking about an object that is other than ourselves. This distinction of ontology won’t rescue us when we start talking about ourselves; for ultimately Turing’s and Godel’s superset based conclusions will also bite when it comes to human self-knowledge: Ergo, when we talk about our own ontology there are certain things we cannot know without raising a contradiction. If these contradictions are not to arise with human self-knowing, Turing and Godel incomputability must also apply to human ontology. In summary, then, the scenario considered by Penrose is not proof that human mental life has available to it incomputable knowledge; a better explanation, in my view, is that Gödelian and Turing restrictions only bite when an ontology attempts to know itself, not when one ontology examines another.

***

However, having said all that I must repeat and extend what I said in the first part of this footnote: Human mental life is likely to be a non-linear process, thereby giving it a chaotic potential which could make it sensitive to what may be the incomputable patterns of quantum fluctuations. As I said in the first part, this non-linearity arises because thinking updates the memories on the thinking surface which in turn affect thinking, thereby effectively giving us a feedback loop with non-linear potential. But in his book “The Mechanism of Mind” Edward De Bono also identifies another way in which the mind may be non-linear in operation: He considers a model where one thinking surface has its activity projected as input onto another thinking surface which in turn has its thinking activity fed back to the first surface. This scenario resembles the case where a video camera sends its output to a screen, a screen which is being videoed by the self-same camera. The feedback in both cases is likely to result in a chaotic pattern of evolution, an evolution sensitive to very minor fluctuations. This fairly prosaic way of postulating an incomputable cognitive process doesn’t even ask for new physics; although it does assume that quantum leaping is a literal process and a process that has incomputability at its heart.
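By way of illustration only, the sketch below caricatures this two-surface arrangement as a pair of coupled non-linear maps. The update rule and parameters are my own invention, not anything taken from The Mechanism of Mind; the point is simply that mutual projection between two non-linear systems produces an irregular, feedback-driven evolution.

```python
# Crude caricature of two coupled "thinking surfaces": each surface's next
# state mixes its own non-linear (logistic-style) update with the activity
# projected onto it by the other surface -- cf. the camera filming its own
# monitor. The parameters are arbitrary; only the feedback structure matters.
def step(x, y, a=3.9, c=0.3):
    fx = a * x * (1.0 - x)        # surface 1's own dynamics
    fy = a * y * (1.0 - y)        # surface 2's own dynamics
    return (1 - c) * fx + c * fy, (1 - c) * fy + c * fx

x, y = 0.40, 0.70
for t in range(20):
    x, y = step(x, y)
    print(f"t={t:2d}  surface1={x:.4f}  surface2={y:.4f}")
```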

So my conclusion is that whilst I don’t think Penrose has successfully shown that the mind is incomputable in his sense, mind nevertheless is potentially a means of delivering incomputable patterns to our world as a result of its sensitive feedback loops.


Penrose on Consciousness

Relevant links:

Thinknet series so far:

Melencolia I series

The Joe Felsenstein series

Friday, August 14, 2015

Deep Internal Contradiction

Dream on:  No red pills for us in our cosmos I'm afraid! We just have to try and work it out for ourselves!

I was interested to see a quote from Paul Davies in a post on the ID web site Uncommon Descent. The post was entitled Physicist Paul Davies’ killer argument against the multiverse (Date August 14th)

“If you take seriously the theory of all possible universes, including all possible variations,” Davies said, “at least some of them must have intelligent civilizations with enough computing power to simulate entire fake worlds. Simulated universes are much cheaper to make than the real thing, and so the number of fake universes would proliferate and vastly outnumber the real ones. And assuming we’re just typical observers, then we’re overwhelmingly likely to find ourselves in a fake universe, not a real one.”

So far it’s the normal argument.

Then Davies makes his move. He claims that because the theoretical existence of multiple universes is based on the laws of physics in our universe, if this universe is simulated, then its laws of physics are also simulated, which would mean that this universe’s physics is a fake. Therefore, Davies reasoned, “We cannot use the argument that the physics in our universe leads to multiple universes, because it also leads to a fake universe with fake physics.” That undermines the whole argument that fundamental physics generates multiple universes, because the reasoning collapses in circularity.

Davies concluded, “While multiple universes seem almost inevitable given our understanding of the Big Bang, using them to explain all existence is a dangerous, slippery slope, leading to apparently absurd conclusions.”

This line of argument has been noted before. The rest of this post is an extract I have taken from a blog post I wrote in February 2007. (See here). The context is the question of how the simulation argument impacts the possibility of time travel:

***

PAUL DAVIES: “The better the simulation gets the harder it would be to be able to tell whether or not you were in a simulation or in the real thing, whether you live in a fake universe or a real universe and indeed the distinction between what is real and what is fake would simply evaporate away…..Our investigation of the nature of time has led inevitably to question the nature of reality and it would be a true irony if the culmination of this great scientific story was to undermine the very existence of the whole enterprise and indeed the existence of the rational universe.”

Let me broach some of the cluster of philosophical conundrums raised by this embarrassing debacle that physics now faces.

Why should our concept of a simulated reality be applicable to the deep future? Doesn’t it rather presume that the hypothetical super beings would have any need for computers at all? The existence of computers is partly motivated by our own mental limitations – would a super intelligence have such limitations? Or perhaps these simulating computers ARE the super intelligences of the future. But then why would they want to think of us primitives from the past? Another problem: Don’t chaos and the absolute randomness of Quantum Mechanics render anything other than a general knowledge of the past impossible? In that case any simulated beings would in fact be arbitrary creations, just one evolutionary scenario, a mere possible history, but not necessarily the actual history. And overlying the whole of this simulation argument is the ever-unsettling question of consciousness: Namely, does consciousness consist entirely in the formal relationships between the informational tokens in a machine?

But even if we assume that the right formal mental structures are a sufficient condition for conscious sentience, the problems just get deeper. If physics is a science whose remit is to describe the underlying patterns that successfully embed our observations of the universe into an integrated mathematical structure, then physics is unable to deliver on anything about the “deeper” nature of the matrix on which those experiences and mathematical relations are realized. Thus, whatever the nature of this matrix, our experiences and the associated mathematical theories that integrate them ARE physics. If we surmise that our experiences and theories are a product of a simulation, physics cannot reach beyond itself and reveal anything about its simulating context. The ostensible aspects of the surmised simulation (that is, what the simulation delivers to our perceptions) ARE our reality: As Paul Davies observed, “… indeed the distinction between what is real and what is fake would simply evaporate away”. Moreover, if physics is merely the experiences and underlying mathematical patterns delivered to us by a simulation, how can we then reliably extrapolate using that “fake” physics to draw any conclusions about the hypothetical “real physics” of the computational matrix on which we and our ‘fake’ physics are being realized? In fact is it even meaningful to talk about this completely unknown simulating world? As far as we are concerned the nature of that world could be beyond comprehension and the whole caboodle of our ‘fake’ physical law, with its ‘fake’ evolutionary history and what have you, may simply not apply to the outer context that hosts our ‘fake’ world. That outer realm may as well be the realm of the gods. Did I just say “gods”? Could I have meant … ssshh … God?

The root of the problem here is, I believe, a deep potential contradiction in contemporary thinking that has at last surfaced. If the impersonal elementa of physics (spaces, particles, strings, laws and what have you) are conceived to be the ultimate/primary reality, then this philosophy (a philosophy I refer to as elemental materialism) conceals a contradiction. For it imposes primary and ultimate reality on physical elementa, and these stripped down entities carry no logical guarantee as to the correctness and completeness of human perceptions. Consequently there is no reason, on this view, why physical scenarios should not exist where human perceptions as to the real state of affairs are wholly misleading, thus calling into question our access to real physics. Hence, a contradictory self-referential loop develops as follows: The philosophy of elemental materialism interprets physics to mean that material elementa are primary, but this in turn has led us to the conclusion that our conception of physics could well be misleading. But if that is true how can we be so sure that our conception of physics, which has led us to this very conclusion, is itself correct?

There is one way of breaking this unstable conceptual feedback cycle. In my youthful idealistic days I was very attracted to positivism. It seemed to me a pure and unadulterated form of thinking because it doesn’t allow one to go beyond one’s observations and any associated integrating mathematical structures; it was a pristine philosophy uncontaminated by the exotic and arbitrary elaborations of metaphysics. For example, a simulated reality conveying a wholly misleading picture of reality cannot be constructed because in positivism reality is the sum of our observations and the mental interpretive structures in which we embed them - there is nothing beyond these other than speculative metaphysics. However, strict positivism is counterintuitive in the encounter with other minds, history, and even one’s own historical experiences. In any case those “interpretative structures”, like the principles of positivism themselves, look rather metaphysical. Hence, I reluctantly abandoned positivism in its raw form. Moreover the positivism of Hume subtly subverts itself as a consequence of the centrality of the sentient observer in its scheme; if there is one observer (namely one’s self) then clearly there may be other unobserved observers and perhaps even that ultimate observer, God Himself. Whatever the deficiencies of positivism I was nevertheless left with a feeling that somehow sentient agents of observation and their ability to interpret those observations have a primary cosmic role; for without them I just couldn’t make sense of the elementa of physics, as these are abstractions and as such can only be hosted in the minds of the sentient beings that use them to make sense of experience. This in turn led me into a kind of idealism where the elementa of science are seen as meaningless if isolated from a-priori thinking cognitive agents in whose minds they are constructed. In consequence, a complex mind of some all embracing kind is the a-priori feature that must be assumed to give elementa a full-blown cosmic existence. Reality demands the primacy of an up and running complex sentience in order to make sense of and underwrite the existence of its most simple parts; particles, spaces, fields etc – these are the small fish that swim in the rarefied ocean of mind. This philosophy, for me, ultimately leads into a self-affirming theism rather than a self-contradictory elemental materialism.

The popular mind is beginning to perceive that physics has lost its way: University physics departments are closing in step with the public’s perception of physics as the playground for brainy offbeat eccentrics. My own feeling is that physics has little chance of finding its way whilst it is cut adrift from theism, and science in general has become a victim of nihilism. The negative attitude toward science, which underlies this nihilism, is not really new. As H. G. Wells once wrote:

"Science is a match that man has just got alight. He thought he was in a room - in moments of devotion, a temple - and that this light would be reflected from and display walls inscribed with wonderful secrets and pillars carved with philosophical systems wrought into harmony. It is a curious sensation, now that the preliminary splutter is over and the flame burns up clear, to see his hands lit and just a glimpse of himself and the patch he stands on visible, and around him, in place of all that human comfort and beauty he anticipated - darkness still."

Wells tragically lost his faith and with it his hope and expectation: He no longer believed the Universe to be a Temple on the grandest of scales, but rather a place like Hell, a Morlockian underworld with walls of impenetrable blackness. In that blackness Lovecraftian monsters may lurk. Nightmares and waking life became inextricably mixed. And in this cognitive debacle science could not be trusted to reveal secrets or to be on our side. The seeds of postmodern pessimism go a long way back.

But we now have the final irony. The concluding words of the Horizon narrator were:

"Now we’re told we may not even be real. Instead we may merely be part of a computer program, our free will as Newton suggested is probably an illusion. And just to rub it in, we are being controlled by a super intelligent superior being, who is after all the master of time."

The notions that we are being simulated in the mind of some super intelligence, that a naïve concept of free will is illusory, that we can know nothing of this simulating sentience unless that super intelligence should deign to break in and reveal itself are all somehow very familiar old themes:

“…indeed He is not far from each of us, for in Him we live and move and have our being…” (Acts 17:27-28)

“My frame was not hidden from You when I was made in the secret place. When I was woven together in the depths of the earth, Your eyes saw my unformed body. All the days ordained for me were written in Your book before one of them came to be” (Ps 139:15&16)

“…no one knows the Father except the Son and those to whom the Son chooses to reveal Him.” (Mat 11:27)

Have those harmless but brainy eccentric scientists brought us back to God? If they have, then in a weird religious sort of way they have sacrificed the absolute status of physics in the process.

Relevant Link:
http://quantumnonlinearity.blogspot.co.uk/2007/01/goldilocks-enigma.html

Saturday, August 01, 2015

The Thinknet Project. Footnote: Incomputability and the Biological Mind

Sir Roger Penrose on the nature of conscious cognition

This present post is really a footnote to my Thinknet series. In that series I am exploring a computerised simulation of connectionism in the hope that it might throw light on the subject of intelligence/mind. So having embarked on a series investigating the nature of thinking with the foundational assumption that intelligence is a process that can be simulated using algorithms, at the end of part 1 I asked the following question:

There was another question that was waiting for a rainy day: Where does Penrose’s work on mental incomputability fit into all this, if at all?

In his books The Emperor’s New Mind and Shadows of the Mind Roger Penrose suggests that incomputability is a necessary condition (but presumably not a sufficient condition) for conscious cognition. That is, according to Penrose what sets apart human intelligence from algorithmic intelligence is that the former somehow exploits incomputable processes. But not everyone is happy with Penrose’s proposal. For example Wiki quotes Marvin Minsky:

Marvin Minsky, a leading proponent of artificial intelligence, was particularly critical, stating that Penrose "tries to show, in chapter after chapter, that human thought cannot be based on any known scientific principle." Minsky's position is exactly the opposite – he believes that humans are, in fact, machines, whose functioning, although complex, is fully explainable by current physics. Minsky maintains that "one can carry that quest [for scientific explanation] too far by only seeking new basic principles instead of attacking the real detail. This is what I see in Penrose's quest for a new basic principle of physics that will account for consciousness."

I wouldn’t say that I’m too chuffed myself about Penrose’s proposal; if it is true then it puts the blocks on all attempts to find algorithmic simulations addressing the problem of intelligence that use standard connectionist models of the mind such as we see in De Bono’s The Mechanism of Mind. Penrose is also challenging positions like that of John Searle, who believes that whilst conscious cognition is algorithmic, algorithmics alone is not a sufficient condition for it: For according to Searle a sufficient condition for conscious cognition is that its algorithms are realised in biological qualities. Therefore constructing a formally correct structure of, say, “beer cans” (as Searle puts it) would only amount to a simulation and not the-thing-in-itself. So, although Searle believes that conscious cognition requires a specialist biological ontology he nevertheless believes that that ontology has an algorithmic formal structure. Even if one should propose that the human mind is some kind of quantum computer (in itself a radical proposal that is very controversial), human thinking would still classify as an algorithmically computable process and that simply doesn’t go far enough for Penrose! Penrose is nevertheless making a serious proposal that needs evaluating.

So what then is incomputability? In his book Algorithmic Information Theory, Gregory Chaitin (following Turing) has a back-of-the-envelope proof of the existence of incomputable numbers….. see the picture below:

Chaitin's proof of incomputable numbers

As Chaitin notes above Turing’s halting theorem and Godel’s incompleteness theorem follow on as corollaries from his Cantor diagonal slash proof of the existence of incomputable numbers (See picture above).

In his proof Chaitin allows his computations to generate numbers of indefinite length. For example, he permits an obviously computable periodic number such as 0.123123123123… etc. which disappears off into the infinite distance. Chaitin also doesn’t put a limit on the number of possible computations (or algorithms) that might exist. However, in practice we know that physical limits constrain both the size of the numbers an algorithm can generate and the number of possible algorithms. So let us assume that practical constraints imply that both the number of algorithms and the number of digits generated have roughly the same value and let’s call that value n. This value n will give us an n x n square of digits. The finite computable numbers that form the rows of this square can be thought of as configurations of digits. Clearly then the n x n square contains n configurations of digits of length n. But Cantor’s diagonal slash method shows that it is very easy to construct a configuration that doesn’t appear in this rather limited set of practically computable numbers, and the reason for this is also fairly obvious – the square contains a mere n numbers, but of course given a configuration of length n it is actually possible to construct a much larger number of configurations, and this number is quantified by an exponential of the form:

~ A exp[kn]          (1)

…where it is clear that:

A exp[kn] >> n          (2)

It follows then that there are far more conceivably constructible configurations of digits than there are practically computable configurations. Of course, we can continue to extend the value of n, but according to (2) the set of practically incomputable configurations increases in size much faster than n and this goes on into the infinite realm where we find  absolutely incomputable configurations.
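The counting point can be made concrete with a few lines of Python (illustrative only: the n rows below are random stand-ins for the outputs of n practically realisable computations). The diagonal slash manufactures a length-n configuration guaranteed to be missing from the n x n square, while the number of possible configurations of that length is base^n – an exponential of exactly the form in (1), with k = ln(base).

```python
import random

n, base = 8, 10   # n "practically computable" strings of n decimal digits each

# Stand-in for the n x n square: n rows of n digits (random here, standing in
# for the outputs of the n practically realisable computations).
random.seed(1)
square = [[random.randrange(base) for _ in range(n)] for _ in range(n)]

# Cantor's diagonal slash: alter the i-th digit of the i-th row.
diagonal = [(square[i][i] + 1) % base for i in range(n)]
assert all(diagonal != row for row in square)   # guaranteed absent from the square

print("diagonal configuration :", diagonal)
print("computed configurations:", n)
print("possible configurations:", base ** n)    # ~ A exp[kn] with k = ln(base), >> n
```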

The foregoing tells us that as n increases there is a spectrum of incomputability running from progressively impractical levels of computability right through to the absolute incomputability at infinite n. Given this spectrum it seems to me that Penrose’s proposal identifying absolute incomputability as a necessary condition of conscious cognition is a little arbitrary; for perhaps the condition for conscious cognition emerges at some finite point along the road to absolute incomputability? However, if we give Penrose the benefit of the doubt then according to Penrose incomputability of process is one of the conditions that sets apart biological intelligence (and which presumably applies to a wide range of animals such as primates, cats, dogs, dolphins etc) from classical algorithmic intelligence. In particular in his book “Shadows of the Mind” Penrose constructs what he believes to be a specific example demonstrating the human ability to think beyond computability; an analysis of his reasoning here is on my to do list. But for the moment suffice to say that the example he gives supporting his notion that humans have knowledge of the incomputable does not entirely convince me: Incomputable knowledge pertains to information about those complex and infinite incomputable patterns. But at no point in Penrose’s example did I feel I was party to anything other than knowledge of fairly regular patterns. Still, I really need to look at his reasoning more closely.

There is, however, one relatively prosaic way in which the human mind could tap into an incomputable process. In Edward De Bono’s book The Mechanism of Mind it is clear that in his models the action of thinking modifies the brain’s memories and conversely those memories affect the way thinking develops. So thinking affects memories and memories affect thinking – in short we have a feedback system here; that is, nonlinearity is part of the brain’s natural processing. This raises the possibility of complex chaotic behaviour which in turn has the potential to amplify the randomness of the quantum world. Randomness, if understood in its absolute sense, is incomputable and it is possible (although never absolutely provable) that random quantum leaping delivers incomputable configurations to our world. If the mind is chaotically unstable enough to tap into these random configurations its behaviour therefore becomes incomputable. In fact it is quite likely that this kind of common-or-garden incomputability is present in the brain anyway – it doesn’t require the mind to tap into some new and exotic incomputable physics or even for the mind to be a quantum computer. This kind of incomputability, however, is clearly not a sufficient condition for conscious intelligence: After all, Saturn’s moon Hyperion is tumbling chaotically and may well be sensitive to random quantum fluctuations, but that doesn’t make it conscious!
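As a toy numerical illustration of that amplification (the dynamics and parameters below are made up for the purpose, not a model of the brain): a single think/memory feedback loop run from two starting states that differ by about one part in 10^12 – a stand-in for a quantum-scale fluctuation – will typically end up in quite unrelated states after a couple of hundred iterations.

```python
# Toy think/memory feedback loop (arbitrary parameters, illustration only):
# the "memory" m biases how the next "thought" x is generated, and each
# thought writes back into memory -- the non-linear feedback described above.
def run(x0, steps=200):
    x, m = x0, 0.5
    for _ in range(steps):
        r = 3.6 + 0.4 * m         # memory shapes the thinking dynamics
        x = r * x * (1.0 - x)     # chaotic, logistic-style "thinking" update
        m = 0.9 * m + 0.1 * x     # thinking updates the memory
    return x

baseline  = run(0.600000000000)
perturbed = run(0.600000000001)   # differs by ~1 part in 10^12
print(baseline, perturbed)        # typically the two end states disagree wildly
```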

Although I’m not particularly comfortable with Penrose’s proposal it is, nonetheless, worth entertaining as a possibility. The fact is we still don’t really know just what are the sufficient physical conditions (i.e. the conditions as observed by the third person) which entail the presence of that enigmatic conscious first person perspective of biological brains. If on the other hand Penrose is wrong and incomputability is not a necessary condition for the first person perspective, then as Penrose points out in his book Shadows of the Mind it becomes possible in principle to at least simulate human intelligence using a sufficiently powerful algorithmic computer. For those who reject the first person as a valid perspective, a simulation of this kind, which would presumably be thorough enough to pass the Turing test, classifies as fully sentient by definition. It is true that nowadays computerised simulations of all sorts are looking increasingly realistic and one can imagine improvements to such an extent that these simulations could fool a lot of the people a lot of the time. But there remains a deep intuition that appearances, no matter how good, are never logically equivalent to a genuine first person perspective, a perspective that is otherwise inaccessible from the third person perspective.

AI expert Marvin Minsky would prefer to think that the quest for an understanding of human sentience is purely a question of understanding its formal complexity and that the ontology which reifies that formal complexity is not relevant. Although I am unsure about Penrose’s proposal that incomputability is a necessary condition of sentience I nevertheless agree with his feeling that the existence of the kind of conscious cognition which is at the heart of practical levels of intelligence points to something about our world which we don’t yet fully understand. That I’m at one with Penrose on this question is indicated by the following quote taken from a post where I criticise IDist Vincent Torley:

Stupidly, to my mind, Torley even claims that an intelligent agent might be insentient: To date this seems highly implausible. A designing intelligence would have to be highly motivated; our current understanding is that all highly motivated goal seeking complex systems are the seat of a motivating sentience: For example, it is a form of irrational solipsism to suggest that the higher mammals are anything other than conscious/sentient. Notably, Roger Penrose’s book "Shadows of the Mind" is based on the premise that real intelligence and consciousness go together.

Following John Searle I’m inclined toward the conclusion that intelligence is not just its formal structure: Viz: A correctly arranged formal structure made of beer cans wouldn’t be a practical manifestation of intelligence – in fact no simulation would be practical, any more than we would expect an aircraft simulator to actually fly! It follows then that we never could have the insentient intelligence that Torley speaks of because any real practical level of intelligence would need to exploit whatever processes deliver sentience and conscious cognition. It is for this reason that I believe we should respect high levels of biological intelligence in all its forms; dogs, cats, dolphins, primates, elephants and, in my opinion, also human embryos.

Be all that as it may, there is one thing I am fairly sure about: In the connectionism sketched out by Edward De Bono in his Mechanism of Mind we are starting to see an uncovering of some of the important principles required for intelligence, although I guess there is a whole lot more to discover: As physicists are all too aware, the disconnect between gravity and quantum mechanics is a sure sign that our knowledge of the world about us is still very partial.