After reading Hubert L. Dreyfus' paper on Heideggerian Artificial Intelligence I was left with many impressions and thoughts, but amidst it all I had the feeling that he is onto something. Dreyfus is a philosopher and is thus inclined to speak in very general, abstract and impressionistic terms. Take this key comment by Dreyfus, for example:
Rather, acting is experienced as a steady flow of skillful activity in response to one's sense of the situation. Part of that experience is a sense that when one's situation deviates from some optimal body-environment gestalt, one's activity takes one closer to that optimum and thereby relieves the "tension" of the deviation. One does not need to know what that optimum is in order to move towards it. One's body is simply solicited by the situation [the gradient of the situation’s reward] to lower the tension. Minimum tension is correlated with achieving an optimal grip.
I think I understand that: the goals one aims for are not literally envisaged by our minds but are implicit, in that one is less aware of goals than of how one is supposed to move toward them. The situations confronting us cause a response, but not necessarily by way of an internalized model that allows us to envisage the goal of that response. However, if Dreyfus’ ideas are to be realized in hardware and software, how does one reduce terms like “optimal body-environment gestalt”, “tension”, “soliciting” and “optimal grip” to bits, bytes and “if-then-elses”?
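Dreyfus' bracketed gloss, tension as "the gradient of the situation's reward", at least hints at one answer: an agent can follow a gradient downhill without ever representing the minimum it is heading for. A minimal sketch (the quadratic tension function, learning rate and sampling width are my own illustrative assumptions, not anything Dreyfus specifies):

```python
def tension(position, optimum=0.0):
    # Hypothetical "tension": squared deviation from an optimum that the
    # agent itself never explicitly represents.
    return (position - optimum) ** 2

def step(position, lr=0.1, eps=1e-4):
    # Move toward lower tension by sampling only the *local* gradient;
    # the optimum is never envisaged, only the direction of relief.
    grad = (tension(position + eps) - tension(position - eps)) / (2 * eps)
    return position - lr * grad

pos = 5.0
for _ in range(200):
    pos = step(pos)
# pos has drifted toward the optimum without the agent ever "knowing" it
```

The point of the sketch is that nothing in `step` mentions the goal state; "being solicited by the situation" reduces to reading a local slope and descending it.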
From the outset I was immediately attracted to Dreyfus' phenomenology: phenomenology is a philosophy founded in the realization that our experience of the world and our thoughts about it are, to all intents and purposes, the extent of our cosmos. This existential philosophy sidesteps the horrendously intractable Cartesian problem surrounding the ontological distinction (if any) between noumena and cognita by positing conscious cognition as the effective center of our cosmos. From Dreyfus' phenomenology follows his starting observation:
[T]he meaningful objects ... among which we live are not a model of the world stored in our mind or brain; they are the world itself.
Dreyfus' starting point makes for a huge economy in his version of AI: a complex model of the world does not need to be carried around in our heads when in fact our experience of that world, delivered to our brains by our senses, will probably serve far better: all we are asked to do is to react to that perceived model and not to exhaustively envisage it. The real world is in effect our 'core memory' and we are but the repository of the neural 'algorithm' that tells us how to react to the contents of that 'memory' as we move around our world. In particular, if the world itself is our model then we don’t have to internalize a model of how it reacts to our actions. In principle our actions could have vast and ramifying effects on the rest of the world, and a comprehensive internal model of reality would have to include the logic required to model these knock-on effects; the problem of trying to somehow cater for the possibility of these escalating effects is called the “frame problem”.
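One way to make this concrete in software is the purely reactive agent: it keeps no internal copy of the world, so when the world changes, there is nothing to update and no knock-on effects to track. A toy sketch (the `world` dictionary, the "light" variable and the threshold rule are all hypothetical stand-ins of my own):

```python
world = {"light": 3}  # stands in for the external world; the agent stores no copy of it

def sense():
    # The agent consults the world afresh on every query --
    # the world itself serves as its 'core memory'.
    return world["light"]

def act(reading):
    # A fixed reactive rule: no internal model, no anticipation of consequences.
    return "approach" if reading < 5 else "rest"

first = act(sense())    # "approach": the light is dim
world["light"] = 9      # the world changes behind the agent's back...
second = act(sense())   # "rest": the next sense() picks up the change for free
```

Because the agent re-senses rather than remembers, the frame problem never arises for it: ramifying changes in the world are simply there to be perceived on the next look.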
Dreyfus looks to be a fairly abrasive character and uses general terms that can be very slippery. It is easy to misinterpret him, and anyone who tries to take Dreyfus' ideas and turn them into something workable is probably taking a similar risk to those who attempt to articulate the meaning of the Holy Trinity and open themselves up to charges of heresy. Dreyfus is a hard taskmaster. He is AI's prophet of doom and, as is the prerogative of prophets of the infinitely complex, he tends to work apophatically; that is, he is much clearer about what human intelligence is not, rather than what it actually is. Perhaps this is a good thing, because there has been so much hype and over-optimism in AI that it cries out for a judgmental preacher. It is very difficult to do justice to the infinitely complex, and the scientific equivalent of charges of blasphemy and idolatry (as humans attempt to create images of their own selves) reminds us not to be too complacent about progress in the face of simplistic and unrepresentative models. Did I just say ‘model’? Aren’t they the things that Dreyfus says we shouldn’t be using?
My own guess is that human intelligence solves the frame problem in a plurality of ways. The ‘absorbed coping’ that Dreyfus talks about may well be found in intelligent organisms like ourselves; his view is that such organisms are dynamic systems coupled to their environment via stimuli which are not processed using representations and models, yet these stimuli succeed in ‘soliciting’ the right responses all the same. It is likely that humans have inherited this computationally economical modus operandi. And yet humans also appear to model the world computationally in the internal Cartesian sense: humans can and do reflect on ‘external systems’ and can anticipate their behavior without coming into contact with them and being prompted by them. However, this reflection often makes use of pencil-and-paper jottings and various external contrivances that help prompt thinking, thus betraying the roots of human intelligence in organisms coupled dynamically to their environment. If the human mind does do symbolic modeling it may not actually be very good at it as a standalone system.
There is one other characteristic of the human mind suggesting that “Dreyfus is right, but....”. In a connectionist model of the mind, everything is connected to everything else through pathways that may be no longer than ~log(N), where N is the number of neurons in the brain. Thus when attention is focused on one activity, like, say, language translation, the whole of the mind’s accumulated connectionist experience is never far away in access terms, and thus the whole domain of an individual's experience can be brought to bear on a problem. The mind has the potential to use the widest frame available to it, and there may be no artificial frame or relevance boundary arbitrarily drawn within the domain of one’s experience. In short, the human mind may not even attempt to solve the frame and relevance problems; instead it throws all its knowledge resources at the situations it meets. Learning never stops, and the human mind is therefore always placing contexts within new contexts. Problem solving truly is an open-ended activity.
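The log(N) figure can be made plausible with rough numbers: in a network where each node fans out to k others, typical path length grows like log(N)/log(k). Plugging in ballpark brain figures (both numbers are rough assumptions, and real cortex is small-world rather than randomly wired, so this is only an order-of-magnitude sketch):

```python
import math

N = 8.6e10   # rough estimate of neurons in a human brain
k = 7_000    # rough synaptic fan-out per neuron
# In a randomly wired network of fan-out k, typical path length
# between two nodes scales like log(N) / log(k):
hops = math.log(N) / math.log(k)
print(round(hops, 1))  # roughly 2.8 hops between any two neurons
```

Even under these crude assumptions, any piece of stored experience sits only a handful of hops from any other, which is what makes "never far away in access terms" credible.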
Summary:
Human intelligence uses three modi operandi:
1. Heideggerian: Human intelligence uses the world itself as its own model.
2. Modelling: Human intelligence has a mental facility to model, but that facility betrays an inheritance from 'Heideggerian' organisms by making frequent use of 'external' world contrivances such as pencil and paper.
3. Isotropy: Human intelligence, via connectionism, does not attempt to impose any a priori limitations on what knowledge resources are relevant to a situation. That is, it doesn't attempt to solve the frame problem.
2 comments:
Interesting article Tim. I have just got round to re-reading your ‘open gospel’ and I think it is one of your finest works. As you know, I’m very much in agreement with you about ‘open gospel’ and you have my full support in propagating such an idea.
Regarding Heideggerian Intelligence et al - I have read papers on these subjects in the past. I am currently working through some papers on physics and some on neuroscience - looking for similarities and patterns that will give extra support to ideas about mind giving the appearance of a small cosmos (or if you prefer, cosmos giving the appearance of one vast mind).
Commenting on the quote you extracted....
“Acting is experienced as a steady flow of skillful activity in response to one's sense of the situation. Part of that experience is a sense that when one's situation deviates from some optimal body-environment gestalt, one's activity takes one closer to that optimum and thereby relieves the "tension" of the deviation. One does not need to know what that optimum is in order to move towards it. One's body is simply solicited by the situation [the gradient of the situation’s reward] to lower the tension. Minimum tension is correlated with achieving an optimal grip.”
Yes, any envisaging done within the mind is, it seems, sublimely implicit in ‘mind’ - in fact, terms such as ‘goals’ and ‘aims’ are, I would say, accretive; that is, mind exploring some of the greater potential of its own artefacts. However, if Dreyfus’ ideas are to be realised in cosmological models - hardware and software - analogues with ‘mind’ phenomenology are only sparse idioms found in ‘software’ itself. That is to say, if the totality of created ‘mind’ is the extent of the Simulacrum (the created simulation of the Divine realm) then yes, the “Cartesian problem surrounding the ontological distinction (if any) of noumena and cognita” is non-existent. If conscious cognition is the primacy of the Divine simulation, and we have only God and creation, anything in the latter must itself qualify as conscious cognition.
And yes, I agree with you that, regarding the systemic whole, the objects among which we live are not a model of the world stored in our mind or brain; they are the conscious cognition itself - it’s all mind! Now of course, from my own position, attempting to conflate mind and cosmos, it is very difficult to do justice to the infinitely complex as there is no scientific equivalent beyond abstract reasoning, which seems to tie in with your next comments.
The Simulacrum does seem to be our model, and our sparse sampling of the systemic whole does not require that we internalise a model of how it interrelates with each first-person selfhood - it is already internalised in selfhood. I suppose it is a bit like ‘personality’, in that one cannot ever sample the whole (not even one’s own) and yet the ramifying effects on other parts of reality seem to be internalised within personality itself. In other words, it is aspectual, yet it contains its own interrelational mechanism that automatically reconfigures itself irrespective of which aspects of outer reality it interrelates with. Just as a comprehensive aspectual model of reality includes the cognitive mechanism required to model these effects, personality includes all the aspectual models of reality that cater for its own essence.
I like your term ‘absorbed coping’. The mathematical ‘absorbed coping’ is clearly found in intelligent organisms like ourselves - even at the most basic level, geometrical and numerical knowledge would have helped with survival; and because of the stable contour lines that run through morphospace, evolutionary diversity has given us a ‘framework’ view that organisms are dynamic systems coupled to their environment via stimuli which are not processed using representations and models. That these stimuli succeed in soliciting the right responses without the use of representations and models does not mean that there is no wider model framework within which degrees of intentionality operate; for in fact, human minds have a degree of computational and informational content that suggests a representative modelling framework somewhere, particularly bearing in mind that the cosmos itself is amenable to computational and informational explanations.
Is this the biggest clue yet that matter IS mind - that the whole cosmos is one vast mind? The fact that when it comes to noumenal things we can predict and logically infer suggests that with mind we have the potential to explore the aspectual nature of mind while knowing that ‘mind’ as a whole is beyond us. Not only is this what separates us from the animals, but this probably explains why we can anticipate X, Y or Z without coming into contact with X, Y or Z.
Could this be what separates modern man from his proto-human progenitors - the point at which God really did put something into man - the first Adam? Because one thing seems certain, regarding proto-humans: we know of various ‘external contrivances’ that help prompt thinking, but it seems pretty clear that if the proto-human mind did any symbolic modelling it wasn’t very good at it, not as an intrinsic aspectual system. In other words, the only creatures that could apprehend the mathematical whole and, perhaps more importantly, enough of the concept of ‘mind’ to apprehend ‘mind’ are the creatures into which God put the requisite parts of Himself, namely Adam and his descendants (which, of course, includes us). If natural selection shows that organisms are dynamic systems coupled to their environment via stimuli which are not processed using representations and models, there must be something else to explain why it seems to human minds that ‘mind’ IS the frame.
This would be remarkable, it would mean that the cosmic blueprint contained within it an algorithmic program that took care of every eventuality that was to be part of the Creator’s plan, and that ‘mind’ itself already contains all the potentiality for everything within the Simulacrum because the Simulacrum IS mind. Every bit of relevant information necessary for the simulated realm must be already present - in mathematical terms, all possibilities are embedded somewhere in the mathematical accretions contained within the Simulacrum. So when the mind registers something ‘new’ we are only speaking of ‘new’ in the sense of being ‘new’ to the senses, not new to the Simulacrum. And given the qualitative difference between ‘minds within mind’ and ‘mind’ itself, it seems that acquisition of knowledge and intellectual supplementation are truly open ended, at least as far as our limitations are concerned within the vastness of the Simulacrum.
This means, of course, that there is (by definition of ‘open ended’) always more effort yet needed in order to crystallise this work, but the similarities between mind and cosmos seem emphatic, particularly as the Simulacrum, if my model is correct, gives the appearance of its own up and running intelligence.
You touch on something very important when you say that ‘intelligence’ is isotropic (it has properties with the same values in all directions). Because it is all embedded in the algorithmic whole, it does not attempt to impose any a priori limitations on which knowledge resources belong to which aspects of the whole because it is all part of the systemic whole. If it is true that whenever we talk of something within the Simulacrum we are talking about ‘mind’ then naturally human intelligence need not interrelate resources to other aspectual situations because the nature of ‘mind’ already takes care of that problem.
Keep up the good work, Tim, in these dark corners of nuclear blogosphere.
Regards
James
Thanks very much, James, for the comments.
This noumena vs. cognita distinction, or non-distinction, as the case may be, presents a real knotty problem, a problem, in fact, that I have grave doubts about ever reaching a solution to. I say ‘solution’, but then it may not even be possible to couch the problem itself in a clearly intelligible way. Superficially, noumena seem to be those things that our perceptions suggest are out there ‘beyond’ our mind; in particular the concept of space suggests something displaced from our own being, in ‘another place’, and therefore other than ourselves, thus being the prime metaphor for the word ‘beyond’. And yet our only contact with noumena is through the senses interface – an in-principle impenetrable barrier like the bulletproof glass in front of a bank cashier through which we conduct our daily transactions with the ‘back office’ of noumenal reality. As I think has been pointed out before, the concept of our bank account has a very different reality in the actual workings of the bank itself, but the interface we have with the bank maintains a host of minor deceptions such as ‘our money’ and ‘our account’. But in the bank itself there is no such thing as ‘our money’; one can’t distinguish ‘my pounds’ any more than one can distinguish bits with a definite identity in the shufflings of binary 1s and 0s inside a computer; it is actually meaningless to talk about binary bits as if they have an identity like grains of dust. (Interestingly, down at the subatomic level particles have similar issues about identity.)
The noumena vs. cognita problem is a classic among metaphysical problems; there is no way in principle of submitting it to empirical enquiry. Any empirical data, needless to say, is grounded, by definition, in our senses and thoughts, the very things we are trying to get away from in order to break through to the nature of noumena. Noumena, then, could, after the fashion of the logical positivists, be declared an unintelligible concept and therefore a problem or question that can’t even be framed, a question that cannot even be asked. But that lands us right back where we started: reality is mind, or at least the mind and the interface it has with an apparent reality ‘beyond’ the bulletproof interface.
However, there is great intuitive difficulty in gainsaying the instinct that what comes through that interface is evidence of some actual stuff beyond it. In particular, those distant galaxies and those dinosaurs were (or are) really there in some deep sense. Hence, in order to resolve this intuitive contradiction and yet do justice to the fact that attempts to get round the senses interface render the notion of ‘noumena’ all but unintelligible (suggesting mind has a very primary position), I have taken up a Berkeleian idealism of sorts. Hence the concept of simulation and the Simulacrum. The latter also addresses the difficulty of the apparent inability of our universe to deliver aseity. Hence our universe can never exist as an independent reality and can only be simulated and sustained on a moment-by-moment basis, presumably by some deeper reality that we identify as God, who has aseity.