Here is a link to a paper on Artificial Intelligence sent to me by Stuart. I’m still absorbing the contents of this paper, but in the meantime here are a few lines of the Hotmail conversation I had with Stuart whilst I was still reading it.
Timothy says:
I’m hoping to do a blog entry on it (the paper)
Stuart says:
Yes. It is an interesting paper
Timothy says:
Yep
Stuart says:
Better towards the end when he discusses Freeman(?)'s stuff
Timothy says:
I'm on that bit now
Stuart says:
Yeah. AI has been a bit of a failure. Which is why the phenomenological perspective in HCI and AI has been very important
Timothy says:
I agree. AI was even starting to look like a failure as far back as the seventies. In the 60s I swallowed the message that we would have HAL machines by the end of the millennium, but then I was only 14
Stuart says:
Haha. Yes. You can be forgiven. I think Dreyfus is a bit harsh but unfortunately he's right
Timothy says:
Looks like it (Editor’s note: well, we shall see!)
Stuart says:
Even simple things like computer vision end up facing the solve-all-AI problem. Divide-and-conquer research strategy doesn't work. And perhaps we can argue that this is present in other disciplines
Timothy says:
Yes, everything taps into a myriad of associations. Similar with language translation: you can't translate everything without enormous cultural knowledge.
Stuart says:
Well indeed
I must admit that Stuart’s comment about the divide-and-conquer strategy failing gave me a slight attack of the jitters: does it mean that incremental evolution can’t evolve intelligence in a piecemeal, step-by-step fashion? ID here we come? In fact, what of our own ability to solve problems, given that we have a limited quantum of intelligence? It may well be true that certain problems are insoluble given a limited 'step size', whether that step size is limited by random walk or by human capabilities. However, whether it is possible to solve any problems at all depends on the existence or otherwise of those “isobaric” lines of functionality conjectured to run continuously through morphospace. In the case of biological evolution those lines must be lines of self-sustaining (= stable) functionality.
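To make my jitters concrete, here is a toy sketch; it is entirely my own invention and not anything from the paper. Morphospace is modelled as the integers, a set of 'viable' points stands in for the self-sustaining configurations, and a random walker whose jumps are capped at a given 'step size' plays the part of incremental evolution:

```python
import random

def reachable(viable, start, step_size, trials=50, max_steps=10_000):
    """Toy model: can a random walker whose jumps are capped at
    `step_size`, and which may only rest on `viable` points (the
    self-sustaining forms), get from `start` to the farthest
    viable point?"""
    target = max(viable)
    for _ in range(trials):
        pos = start
        for _ in range(max_steps):
            move = random.randint(-step_size, step_size)
            if pos + move in viable:      # only stable forms persist
                pos += move
            if pos == target:
                return True
    return False

# A continuous "isobaric line" of functionality: every point is viable.
chain = set(range(31))

# Isolated oases: viable points separated by gaps of width 5.
oases = set(range(0, 31, 5))

print(reachable(chain, 0, step_size=1))   # True:  small steps suffice
print(reachable(oases, 0, step_size=1))   # False: the gaps are uncrossable
print(reachable(oases, 0, step_size=5))   # True:  a larger step size bridges them
```

On this picture, whether incremental evolution 'works' reduces to whether morphospace obliges with connected lines of viability; step size only starts to matter where it doesn't.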
I was talking about divide-and-conquer in terms of AI really.
No doubt. But I was using precisely the kind of thinking that some solutions to the frame problem find difficult to handle; namely, throwing out very tenuous links into apparently unrelated domains on the basis of vague similarities and analogies. Some solutions to the frame problem may well impose watertight compartments of relevance in order to prevent the wasteful upload of seemingly irrelevant domains. However, I wonder whether the human mind in fact imposes any relevance restrictions at all, because the “log(N)” paths (where N ~ the number of neurons) have the effect of connecting almost anything to just about everything else. It’s the ‘small world theory’ in another guise.
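The log(N) point is easy to demonstrate with the standard Watts-Strogatz small-world model (my own illustration here, using the networkx library; the numbers are illustrative only): rewiring even a small fraction of a ring lattice's edges into long-range 'tenuous links' collapses the typical path length between nodes towards the log(N) regime.

```python
import random
import networkx as nx

def mean_path_length(G, samples=20, seed=0):
    """Estimate the average shortest-path length by running a
    breadth-first search from a random sample of source nodes
    (the exact all-pairs computation is needlessly slow here)."""
    rng = random.Random(seed)
    total = count = 0
    for src in rng.sample(list(G), samples):
        for dist in nx.single_source_shortest_path_length(G, src).values():
            total += dist
            count += 1
    return total / count

# Watts-Strogatz graphs: n nodes in a ring, each joined to its k nearest
# neighbours, with a fraction p of edges rewired to random targets.
for p in (0.0, 0.01, 0.1):
    G = nx.watts_strogatz_graph(n=10_000, k=10, p=p, seed=1)
    print(f"p={p}: mean path length ~ {mean_path_length(G):.1f}")

# p=0 is a pure lattice: typical paths run to hundreds of hops. Even
# p=0.01 drags that down by an order of magnitude, and p=0.1 lands in
# the handful-of-hops log(N) regime, where nearly everything sits
# within a few links of everything else.
```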
Analogical thinking is very important, it seems: how would we have arrived at quantum theory and general relativity without the application of analogues from outside these fields?
In this particular case I noticed that the ‘divide and conquer’ strategy looks just a little like evolution’s incremental routes to complex survival solutions, and talk of that strategy’s failure looks a little like irreducible complexity. So, using analogical thinking, my thought was this: does a seeming failure of the divide-and-conquer strategy in one domain port to another domain?
(Actually it’s not an evolution stopper because, to cut a long story short, it’s all bound up with how the ‘solution space’ is organized and whether the ‘solutions’ in that space are distributed in ‘chains of islands’ or as completely isolated oases. Using analogical thinking again, we find that evolution links back to where we started, namely thinking about thinking: thinking, like evolution, will work as a ‘solution’ generator if there is the right mix of a well-laid-out ‘solution space’ and intrinsic mental abilities conferring a ‘step size’ sufficient to allow ‘mental’ hops from one solution to the next. In short, evolution is a limiting form of intelligence! So the ID vs. Evolution debate is in fact an ID vs. ID debate, or if you like an Evolution vs. Evolution debate!)
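To cut that long story shorter still, the idea boils down to a one-line criterion (again a toy formalization of my own, nothing more): a chain of solution 'islands' is traversable by a searcher precisely when no gap between neighbouring islands exceeds its step size, and evolution is then just the small-step-size limiting case of the same search.

```python
def traversable(islands, step_size):
    """A searcher limited to hops of at most `step_size` can walk the
    whole chain of solution islands iff no gap between consecutive
    islands exceeds that step size."""
    pts = sorted(islands)
    return all(b - a <= step_size for a, b in zip(pts, pts[1:]))

chain_of_islands = [0, 2, 3, 5, 8, 10]    # gaps of at most 3
isolated_oases   = [0, 2, 30, 60]         # gulfs of width 28 and 30

print(traversable(chain_of_islands, 3))   # True:  evolution-sized hops suffice
print(traversable(isolated_oases, 3))     # False: small hops stall at the gulf
print(traversable(isolated_oases, 30))    # True:  a larger cognitive 'step
                                          #        size' clears the gaps
```

The earlier toy sketch walked such islands stochastically; this is simply the condition under which a walk of that kind can succeed at all.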