Wednesday, June 27, 2018

Jordan Peterson and the Wild Web


Male role model: Jordan Peterson debates some opponents


Chris Erskin, who wrote a comment to my post here, has urged that I get up to speed on the Jordan Peterson affair. The following is my first foray into this business and represents my back-of-the-envelope thesis on the phenomenon. I have formed this thesis on the basis of my rather limited sample of experience so far, and I have quite intentionally banged it out as a quick pro tempore treatment before I learn too much about the good professor and the furore he is at the centre of. The reason for this is to see how well this first thumbnail sketch, which boils down my experience and thinking to date, bears up as I learn more: how well can the human mind form a conclusion on a small sample of data and get it right?

*** 


“Male” and “Female” are fuzzy multidimensional categories with competing thematics. For example, at the extremes (and I stress "extremes") one can’t act as a single-minded hunter and at the same time as a multitasking, child-rearing gatherer. Competing thematics are common in a systems-theoretic context. For example, armour and protection often compete with mobility; it's difficult to satisfy both. Ask a lobster.

However, I must stress the fuzzy multidimensional nature of the male/female categories, from which, in advance, we can predict the possibility of such a thing as “blended gender”. As with other biological categories we are likely to find a moderately loose clustering around norms, norms that our conceptual vision often posterizes.

OK, so let’s proceed on the assumption that there is such a thing as fuzzy & normative (and I stress "fuzzy" & "normative") male-female gendering, which is a complex and probably largely unknown function of nature and nurture.

This is where Peterson comes in. My current working hypothesis is that our Western culture, no doubt as a result of societal role changes since the industrial revolution, has in the long term arrived at a situation where the normative clustering we find around male and female models is not 100% appropriate to the current societal set-up. This has led to hard feelings on both sides of the normative gender distribution: females well able to take up tasks otherwise prejudiced against them have felt marginalized, and males feel that their masculine dominating & leading role has become threatened.

Enter Jordan Peterson, the cool, fast-talking champion of the male-leaning gender. Intellectually he looks to be incredibly fast on the draw and as well able to handle himself as Clint Eastwood in the Wild West. A perfect male role model.

He’s applauded by both right-wing atheist and Christian males who perceive him to articulate what they have been feeling in their guts for a long time, especially the threat to polarized conceptions of gender and the denigration of maleness. Many men cheer him on as their champion because he expresses and argues so well for what they instinctively feel.

So summarizing: I see the Jordan Peterson phenomenon as a reaction to:

a) A societal structure which isn’t a hand-in-glove fit for a polarised model of maleness and femaleness. The hunter-gatherer and pre-industrial agrarian societies might have been a better fit in this respect.

b) The fact that it has become increasingly apparent in recent times that when "God created them male and female" this wasn't a clear-cut binary distinction but two fuzzy multidimensional categories of a normative distribution.

There may be other aspects at play here as well: The up-and-coming eco-movement, which so often strikes a chord with the female gender, perhaps doesn’t sit so well with the stereotypical male go-out-conquer-and-exploit role.

Finally I must add that I’m not a Marxist social reductionist, a Postmodernist or anti-free market. Marxism, certainly in its early forms, had a very weak conception of human nature and still does (I think Peterson is probably right about that). But then neither am I a libertarian. Relevant links in this connection are:
***

Anyway, that’s my current intellectual state of play regarding the good professor. However, it's early days yet. No doubt more analysis to come! This is just my first shot!



***



ADDENDUM 10/7/18

The day after I drafted the above article my copy of "Premier Christianity" landed on my doormat. It featured an article on the Peterson enigma. Several times the article remarked on his young male following (probably white males, I suspect), e.g.:

Why are so many young men following him? And why should Christians care?

Perhaps by way of answer the article later remarked:

Comparisons have been made to the muscular Christianity that led so many young men to follow church leader Mark Driscoll at one time.

Letting males be males may have something to do with it, although I doubt Peterson would be pleased to be compared to Mark Driscoll, and rightly so! (See here and here.) The article quotes Peterson as saying:

 "Men and women are not the same"

That, in my view, is very likely true! As I have said above, human nature is a largely unknown function of both nature and nurture, with gender being fuzzy and normative. But I regard the social reductionism of Marxism (i.e. positing human beings as economically interchangeable parts) as much an error as does Peterson. And yet Peterson is also quoted as saying:

...the idea that women were oppressed throughout history is an appalling theory.

Well, it's undoubtedly true that some parties may have an interest in overstating the historical oppression of women, but on my understanding of history these words of Peterson's seem as much an overstatement as what he is speaking out against!

On the subject of Peterson's version of Christianity we read:

Peterson has consistently refused to be pinned down on his personal religious convictions. When I pressed him on it, he described himself as a "religious man" who was "conditioned in every cell as consequence of the Judaeo-Christian worldview".  The closest I could get to whether he really believed in God was that he lives his life "as though God exists", saying "The fundamental hallmark of belief is how you act, not what you say about what you think".

In a world poised to react to information, what one says is in itself an act. The acts-vs-words dichotomy is therefore a distinction difficult to maintain.

However I was interested in this quote from one of his books:

"I knew that the cross was simultaneously the point of greatest suffering, the point of death and transformation, and the symbolic centre of the world"

...and that reminds me of something I wrote on pages 4 & 5 of this document.

Tuesday, June 26, 2018

Artificial Intelligence, Thinknet and William Dembski

If I'm right then those anticipated AI-enabled super-intelligences, with huge AI resources and information at their disposal, have arrived and they are walking our streets already!


I was intrigued to see a blog post by Intelligent Design guru William Dembski where he gave his opinion on artificial intelligence. This post can be found here:


I have always thought of Dembski as a nice Christian bloke with some useful ideas. But he is a person who, in the minds of many, has been too closely linked with a vociferous right-wing, anti-atheist, anti-academic-establishment Christian culture, a culture which I suppose is in some ways a function of the left-right polarization in US politics and culture. This may explain why a Mr. Nice Guy like Dembski has been unpleasantly abused by atheists.

Dembski's work has shown that the creation of life, whether via evolution or other means, requires a huge burden of information, and this conclusion is undoubtedly correct (see here). But then even atheists like PZ Myers and Joe Felsenstein will tell us that evolution is not a purely random process (see here, here and here) and therefore by implication must somehow be tapping into a high level of background information. Moreover, Dembski's ideas, on his own admission, don't directly contradict standard evolution provided one admits that for standard evolution to work it must be exploiting a priori information resources (see here).

But it wasn't just atheists who were to abuse Dembski. In 2016 (or thereabouts) Dembski was given his marching orders from Southwestern Baptist Theological Seminary (see here). He had proved too conservative for the "liberal" evangelicals at Baylor University and now too "liberal" for the fundamentalists. Of late he has been somewhat out in the wilderness, comfortably fitting in with neither liberal evangelicalism nor fundamentalist evangelicalism. There may also be signs that even the de facto IDists aren't too happy with him. Fundamentalist theme park manager Ken Ham has waded into the anti-Dembski fray with his inimitable line of spiritual abuse; in Dembski's own words Ham "went ballistic" at some of Dembski's theology (but then Ham is in a constant ballistic state about Christians who don't agree with him). True, some of Dembski's theology does seem a little novel, but he's a brave and radical thinker who is prepared to stick his neck out and take the risks entailed by mooting new ideas; the person who doesn't make mistakes doesn't make anything. Seek, reject and select: the subject of searching & rejecting is, after all, one of Dembski's major specialisms! Being in the trackless wilderness where you have little to lose can give one that edgy intellectualism.

However, the trail I personally have been led down regarding the role of intelligence in our cosmos is a little different from Dembski's. I would characterize myself as an explorer of the notion of Intelligent Creation as opposed to intelligent design. Intelligent Creation eschews the IDists' default dualism, which tends to revolve around a "natural forces" vs "intelligent agency" dichotomy. Intelligent Creation regards the processes of life creation as a form of cognition in action. In contrast, de facto ID tends to treat intelligent processes as inscrutable, almost sacred ground and explicitly states that the exact nature of intelligence is beyond its terms of reference. In fact some IDists will say that the science of ID is largely about "design detection" and beyond this it has little to say (see here). This policy, I think, in large part traces back to William Dembski's explanatory filter. This filter results in an epistemic which I have reviewed critically (see here). Roughly speaking, the filter means that for the IDist "intelligent design" is a kind of default explanation which is considered to be "detected" when the supply of all other ("natural") explanations is exhausted.
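To make the shape of that default-by-elimination logic concrete, here is a minimal sketch in Python. It is my own paraphrase of the filter as I have just described it, not Dembski's formal statement, and the predicates and example values are invented placeholders:

```python
# A rough sketch of the explanatory filter as a default-by-elimination decision:
# "design" is only returned once law-like necessity and chance have been ruled out.
# The predicates below are hypothetical stand-ins, not part of any real library.

def explanatory_filter(event, explained_by_law, probable_by_chance) -> str:
    """Classify an event as 'necessity', 'chance' or 'design' (by elimination)."""
    if explained_by_law(event):
        return "necessity"          # a regularity or natural law accounts for it
    if probable_by_chance(event):
        return "chance"             # not improbable enough to rule out chance
    return "design"                 # the default once "natural" explanations run out

# Illustrative use with stand-in judgements:
verdict = explanatory_filter(
    event="500 coin flips, all heads",
    explained_by_law=lambda e: False,      # no known law forces this outcome
    probable_by_chance=lambda e: False,    # 2**-500 is far too improbable
)
print(verdict)   # -> "design", arrived at purely by elimination
```

The point of the sketch is simply that "design" never appears as a positively characterised process; it is whatever is left over once other explanations run out, which is exactly the epistemic feature I take issue with.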

One of the consequences of Dembski's epistemological heuristic is that IDists have a tendency to regard the details of intelligence as beyond the pale of human reason and science; they consider their work to be complete once intelligent agency has been identified and proceed little further. Another consequence is that they tend to see the input of intelligence as taking the form of injections of information that punctuate natural history, injections whose origins are an enigma. Beyond claiming that these information injections are sourced in some inscrutable entity identified as "intelligence", de facto IDists are disinclined to comment further. Given all this it is no surprise that some IDists have welcomed Roger Penrose's proposal that intelligence, at least of the human kind, is an incomputable process (see comment 4 on this Uncommon Descent post). If this is true then human intelligence has got its work cut out if it is aiming at self-understanding.

Although I think IDists should be trying harder to understand the nature of intelligence, I would be the first to concede that human-level intelligence (and beyond), either because of its sheer complexity or for other unidentified fundamental reasons, may be beyond human understanding. Since the target of human understanding here is our own human intelligence we are, of course, talking about self-understanding; this is one of those intriguing self-referencing scenarios. But in spite of a possible barrier to human self-understanding, in my Thinknet Project I'm proceeding under the working assumption that it is possible for the human mind at least to make some significant inroads into self-understanding, even if it perhaps proves impossible to arrive at a full understanding. However, although I'm very non-committal about how far my work on the natural process of human thinking can take me, it is very likely that the average IDist would look askance at this work because of de facto ID's commitment to the sacred and almost inscrutable role intelligence plays in its epistemic paradigm. Moreover, given the polarized state of the pan-Atlantic debate on the intelligent design question, it may be that the average right-wing-leaning IDist & borderline fundamentalist would see me as a lackey of the totally depraved academic establishment!

I was intrigued to see that, in a digression in his post, Dembski, unlike some other IDists, rejects the idea that the human ability to understand Gödel's incompleteness theorem suggests that human cognition is an incomputable process. Dembski's digression may bear some resemblance to my own reasons for rejecting Gödel's incompleteness theorem as proof that human cognition is an incomputable process (see here for my reasoning). I won't take Dembski's digression any further here, as I'm more interested in his belief that AI is unlikely to reach a human level any time soon. And that is what I will look at here.

***

In his post Dembski says this:

It would be an overstatement to say that I’m going to offer a strict impossibility argument, namely, an argument demonstrating once and for all that there’s no possibility for AI ever to match human intelligence. When such impossibility claims are made, skeptics are right to point to the long string of impossibilities that were subsequently shown to be eminently possible (human flight, travel to the moon, Trump’s election). Illusions of impossibility abound.

Nonetheless, I do want to argue here that holding to a sharp distinction between the abilities of machines and humans is not an illusion of impossibility.

I like Dembski's approach here: unlike some of the right-wing IDists, Dembski is tentative and he isn't burning his bridges. He's not arguing for the outright impossibility of human-level AI; he's going for the weaker thesis that on the current evidence AI looks to be of a very different quality to human intelligence. On that weaker thesis, I have to say, I probably agree with him.

The argument Dembski develops goes something like this: up until now all AI has been adapted to particular well-defined problem areas, problem areas that are relatively easy to reduce to closed-ended searches. But human intelligence is not restricted to the solution of well-defined problems, problems specified in advance. Moreover, human intelligence cannot be modeled as an extensive library of searches managed and appropriately selected to match the problem in hand by some higher level of intelligence which Dembski identifies as a "homunculus" skilled at searching for the right search. For it seems that in human intelligence we are looking for a much more open-ended ability, an ability which has the competence to move between problem areas which haven't been defined in advance and construct the search needed to solve the problem in hand. Originality, novelty and imagination are the watchwords for human intelligence. It is the competence to deal with this open-endedness, something human beings are good at, which seems to be missing in current AI.
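As a way of fixing ideas, here is a toy Python sketch of the "library of searches plus homunculus" set-up. This is my own illustrative caricature, not Dembski's formulation, and all the names in it are invented:

```python
# A caricature of the "library of searches" picture: a set of narrow solvers,
# each tuned to a pre-specified problem area, plus a meta-level "homunculus"
# whose only job is to search for the right search. All names are illustrative.

from typing import Callable, Dict, List

SOLVERS: Dict[str, Callable[[List[int]], object]] = {
    "sort":    lambda data: sorted(data),   # each entry is a closed-ended search
    "maximum": lambda data: max(data),
    "total":   lambda data: sum(data),
}

def homunculus(problem_type: str, data: List[int]) -> object:
    """The search-for-a-search: pick a solver by matching a pre-defined label."""
    if problem_type in SOLVERS:
        return SOLVERS[problem_type](data)
    # A genuinely novel, open-ended problem defeats the library: nothing here
    # can construct a brand-new search that wasn't anticipated in advance.
    raise NotImplementedError(f"no solver was anticipated for '{problem_type}'")

print(homunculus("sort", [3, 1, 2]))        # works: the problem type was foreseen
try:
    homunculus("write a sonnet", [])        # fails: open-ended and unforeseen
except NotImplementedError as err:
    print(err)
```

The missing ingredient, on Dembski's account as I read it, is whatever would let the meta-level construct new searches for problems it has never seen; that is the open-endedness humans seem to have and current AI lacks.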

The general drift of Dembski's argument is probably correct. The highly generic catch-all nature of human-level intelligence isn't found in an ability to rapidly traverse a well-defined search space but in an ability to make inroads into problems not yet conceptualized, and this sets it apart from current AI. If I'm reading Dembski correctly then he is saying that little progress has been made on the problem of handling the generic problem. This looks to be true to me. And yet Dembski is characteristically and rightly tentative about his conclusion; for on the question of being able to program a general problem solver he says this (my emphasis):

Good luck with that! I’m not saying it’s impossible. I am saying that there’s no evidence of any progress to date. Until then, there’s no reason to hold our breath or feel threatened by AI. 


So, Dembski isn't saying that it is impossible to solve the problem of the generic problem solver, the solver which can solve cold problems; rather he is telling us that progress on this question isn't very noticeable. That question can in fact be expressed succinctly and equivalently as follows: can the human mind understand its own processes of understanding? If so, then AI aficionados are in with a chance. But, as I once heard one AI expert say, our current attempts to create human-level AI may be like a monkey climbing a tree and thinking it has made progress in getting to the moon; the sheer complexity of human cognition may yet confound the attempt. But then if the human mind is that complex, perhaps it is complex enough to wrap itself around its own complexity!

The fact is we do not know whether the human mind is equipped to fully understand itself or not. It follows, then, that we don't know whether human-level AI is attainable by human intellectual endeavor. Dembski himself doesn't claim to be able to answer this question absolutely either way; he's just saying that a heck of a lot more work needs to be done if indeed this problem is humanly solvable. In that he's probably right.

But if the barrier to the creation of human-level AI is sheer quantitative complexity (as opposed to some hidden in-principle impossibility), complexity which requires centuries of work, then we've got to make a start at some point and we may as well start now...

***

To this end, my own attempts to address the AI question can be found in my Thinknet project. This project revolves around the gamble that associationism is the fundamental language with the potential to cover and render the whole of reality. This working assumption is based in part on my experience with my Disorder and Randomness project and the item-property model I employ there, a model which I picked up and used in my connectionist Thinknet Project.

My tentative thesis is that all search problems, or at least a wide range of search problems, can be translated into my item-property interpretation of connectionism and the searching of an association network. But having said that, it is clear that using this basic idea to build human-level AI may still entail all but unfathomable construction complexities, just as the relatively simple and elemental periodic table is only the first rung on the ladder up to the incredible chemical complexity of life. Something of this potential complexity in cognition is glimpsed in part 4 of the Thinknet project.
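For readers who haven't followed the Thinknet documents, here is a heavily stripped-down sketch of the kind of item-property search I have in mind. The items, properties and scoring rule are invented for illustration only; the real Thinknet model is considerably more involved:

```python
# A stripped-down item-property association network: items are linked to
# properties, and a "search" is a pass that selects the items most strongly
# associated with a set of stimulated inputs. Everything here is illustrative.

from collections import defaultdict

NETWORK = {                      # item -> set of associated properties
    "sparrow": {"has_wings", "lays_eggs", "small"},
    "ostrich": {"has_wings", "lays_eggs", "large"},
    "bat":     {"has_wings", "nocturnal", "small"},
}

def associative_search(stimulated: set) -> list:
    """Score each item by its overlap with the stimulated properties and
    keep only the best-matching items (the built-in 'teleological' selection)."""
    scores = defaultdict(int)
    for item, properties in NETWORK.items():
        scores[item] = len(properties & stimulated)
    best = max(scores.values())
    return sorted(item for item, score in scores.items() if score == best)

# Stimulating the two input patterns "has_wings" and "small" selects the rare
# items satisfying both - the [A B] => C operation discussed in section 11 below.
print(associative_search({"has_wings", "small"}))    # -> ['bat', 'sparrow']
```

In the full project the activation spreads through chains of intermediate associations rather than a flat lookup, which is where the exponential dependence on penetration depth mentioned in section 11 comes from.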

And there is of course the big question of human consciousness. I hold the view that even the most sophisticated silicon-chip simulation of human neural-ware would not be conscious, any more than a near-perfect flight simulator can be counted as a flying aircraft. This is a point that philosopher John Searle makes with his Chinese room and beer-can arguments. And yet I'm not persuaded by dualist ghost-in-the-machine ideas. I am much more inclined to believe that something about the way neural chemistry uses the fundamentals of matter has the effect of introducing conscious qualia into its otherwise formal cognitive structures. In comparison, silicon-chip-based computers, even though they may have the potential to simulate correctly much of the formal structure of human cognition, do not exploit the material qualities (as opposed to the formal structures) needed to bring about conscious cognition. I don't think I'm being very original in suspecting that those qualities have got something to do with the wave-particle duality of matter - something that the current "beer-can" paradigm of computing does not exploit.

So, if I am right, then to create conscious human-level AI one really needs to copy the God-given cognitive technology we already have to hand and which is open to close scrutiny - namely, human (and animal) cognition. But if we did this and created human-level AI it would really no longer classify as AI; it wouldn't be just a silicon-chip copy of the formal structure of human cognition but also a copy of the material qualities of human cognition, and would therefore classify as RI - that is, Real Intelligence. Dembski himself may also be thinking along these lines when he says this:

Essentially, to resolve AI’s homunculus problem, strong AI supporters would need to come up with a radically new approach to programming (perhaps building machines in analogy with humans in some form of machine embryological development).

So, if and when we succeed in building human-level AI we would have come full circle - we would effectively have manufactured a biological intelligence and reinvented the wheel. Yes, mimicking the right formal structures of Real Intelligence is a necessary condition for RI, but not a sufficient condition. For RI also requires those formal structures to be made of the right material qualities, qualities supplied by no less than the Almighty Himself. Matter, so-called "natural" matter, is in fact a miracle - in a sense it is "supernatural" - and therefore it is no surprise that it has some remarkable properties, such as the ability to support conscious cognition.

OK, so let's assume that in the long run at least we learn how to create beings of conscious cognition. Is it possible that they may have an intelligence that far exceeds our own? Perhaps, but then there may be limits to the generic problem-solving cognition which makes use of the item-property connectionist model (the IPC model). After all, for the IPC model to work as a problem solver it must first compile reality into the general cognitive language of the association network; this can only be done via a lengthy process of experience and learning. Like ourselves, an IPC "machine" would have to go through this learning process. Now, a learning process is only as good as the epistemology our world supports and only as good as the data that that world feeds learning. For example, a completely isolated super-intelligent IPC machine would learn very little and remain a super-intelligent idiot. Thus our budding super-intelligence can only learn as fast as the external world supplies data, and that may not be very fast. Moreover, the data may have biases, misrepresentations and errors in it. To deal with this, the super-intelligent IPC machine may have built into it some seat-of-the-pants hit-and-miss heuristics that in the wrong circumstances come out as spectacularly inappropriate biases. This all sounds very human to me.

And while we are on the subject of the human, it may be that in our IPC machine we have to duplicate other advantageous human traits such as social motivations and community heuristics which would allow an IPC machine to benefit from community experience. But that comes with trade-offs as well, in particular the limits inherent in the data supplied by the community: outsourcing cognitive processing to other members of a community comes with hazards. Perhaps our gregarious super-intelligent machine may become a young earthist, a Jehovah's Witness or a flat earther! The take-home lesson here is that the epistemics of our world is such that a complex adaptive system like an IPC machine will necessarily have to engage in the compromises and trade-offs inherent in seat-of-the-pants heuristics.

Another thing to bear in mind is that intelligent performance is a multidimensional quantity: there are different kinds of intelligence. Some individuals can excel at task X and yet be poor at task Y. This may be because it is not logically possible to be good at both X and Y; these tasks may have a logical structure which entails a conflict.

In summary: it may well be that our world puts limits on how intelligent an IPC machine can be. In any case an IPC machine is likely to be slow, because it takes time to render the world in terms of its highly generic associative language. And its store of knowledge will only develop as fast as learning allows, for this learning may be hamstrung by the rate at which our world supplies data.

***

If I may indulge in a little speculative futurology, then my guess is that whilst successful human-engineered RI is possible (and will likely have the touch and feel of human thinking), there will be no AI takeover. Rather, what will happen is that humans (and perhaps human-engineered IPCMs) will become more and more integrated with an extensive toolbox of specialized conventional AI applications.

Human intelligence (and perhaps even engineered IPCMs) will become the homunculus that Dembski talks of; in short, AI will serve as an increasingly integrated extension of the human mind. In many ways this is already happening. The average human who has access to a web-connected desktop computer or an iPhone has at his disposal potentially huge information and AI resources. And maybe one day web connectivity will be connected directly to the brain surgically! Web apps will do slave tasks that IPC cognition finds difficult because of the processing overheads inherent in the need to translate to the general IPC language. But IPC will remain the core intelligence, effectively the managing homunculus spider in the middle of the web. If that "spider" happens to be a human-engineered IPCM then I feel it will necessarily display very human personal foibles and limitations, as one would expect from what would essentially be a complex adaptive system placed in a world whose epistemic interface does not always provide easy interpretation.

As far as processing and information resources are concerned, the person connected to the internet has huge advantages over the disconnected mind; so much so that the connected person is effectively a genius in comparison! For example, I don't use an iPhone, and this means that when I'm out and about, the bright teenager who walks past me in the street and who is a smart operator of his iPhone is, in comparison to myself, a super-intelligent AI-extended being! When he isn't viewing porn or playing computer games he can far exceed anything I can do intellectually! In one sense that AI super-intelligence has already arrived!

Tuesday, June 12, 2018

The Thinknet Project Part 4



The fourth part of my Thinknet Project can be downloaded from here. Below I reproduce section 11, a section about Intelligent Design, a subject which is very relevant to this blog as I have posted so much on this contentious question.

 The other parts to this series can be picked up here:

http://quantumnonlinearity.blogspot.com/2015/12/thinknet-project-articles.html


***


11.  A note on Intelligent Design, specified complexity and information creation.

As we saw in the last section, Thinknet is a way of seeking and selecting the improbable, given certain input specifications, specifications realized as consciously stimulated patterns. As with an internet search, a Thinknet search has the potential to return rare cases, and this equates to improbable cases; that is, cases of high information.
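To spell out the rarity-information link being used here (this is just the standard Shannon measure, nothing specific to Thinknet): if a returned case C has probability P(C) against the background of possible returns, its surprisal in bits is

```latex
I(C) = -\log_2 P(C)
```

so the rarer the case a search manages to pick out, the more information its selection represents.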

The de facto Intelligent Design community often talks about specified complexity and the impossibility of “natural processes” (sic) creating information, thereby implying that such things are only available to an intelligent process. It is not always clear just what the de facto IDists mean by an intelligent process and by specified complexity. Also, it is not clear why “natural processes” (which for the Christian are processes created, sustained and managed by a “supernatural” God, so they are hardly “natural”) can’t create information; after all, human brains presumably classify as “natural processes” and yet they seem to be able to create information.

As we have seen in my Melancholia I project, so-called “natural processes” can create information, especially if they have an exponentially expanding parallelism. The reason why macroscopic “natural processes” appear not to be capable of creating information is that:

a) They don’t often have this expanding parallelism and therefore generate information only slowly, with the logarithm of time.
b) They don’t often have a teleological selection structure which clears away generated non-targeted outcomes. Therefore “natural processes” appear not to leave their high-information targets conspicuously selected.

Thinknet, on the other hand, has both of these features, viz:

a) An exponentially expanding parallelism is required to search the complex network of associations.
b) Thinknet has a built-in teleology which leads to the clearing away of outcomes that do not meet the target criteria. (A back-of-the-envelope comparison of these two information-generating regimes follows below.)
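Under the simplifying assumption that each visited state is an equally likely candidate for the target, the contrast between the two lists above can be put in back-of-the-envelope terms:

```latex
% A serial process visits roughly t states in time t, so the rarest outcome it
% can single out has probability no smaller than about 1/t:
I_{\text{serial}}(t) \;\lesssim\; \log_2 t
    \quad \text{(information grows only with the logarithm of time)}

% A process with exponentially expanding parallelism of branching factor b
% visits roughly b^t states in the same time:
I_{\text{parallel}}(t) \;\lesssim\; t \log_2 b
    \quad \text{(information grows linearly with time)}
```

Selection is then what makes the high-information outcome conspicuous rather than leaving it buried among the exponentially many alternatives.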

If Thinknet is an indication of the fundamentals of human cognition then it follows that the human mind is a natural process which conspicuously creates information and targets it. This is not to say that what we classify as non-intelligent processes don’t create information; as I have said above, they do, but not conspicuously, because in many non-intelligent processes information is only generated in slow logarithmic time and also without selection. The latter point in particular may explain why atheist world views tend to have a preference for information symmetry: the many-worlds interpretation of quantum mechanics does entail exponentially expanding parallelism, but it lacks the asymmetrical selection of information, a trait which smacks of a teleologically interested intelligent process; the latter would of course be unacceptable to many atheists.

The Thinknet simulation does, however, throw some light on de facto ID’s so-called “specified complexity”. In my simple Thinknet simulations two stimulated input patterns A and B are used to specify a sought-for outcome, in a similar way to an internet search. As we have seen, symbolically this can be expressed as:

[AB] => C          (21.0)

Input patterns such as A and B have zero information as they are from the outset known objects; that is, they have no Shannon “surprisal” value expressed by an improbability. But from the outset C is an unknown and in fact may be a member of very small class of objects which fulfill the conditional specifications A & B. Thus, C may have a high improbability implying that it is a high information object. The computational complexity of the outcome C is implicit in the symbols “[ ] => ”. These symbols represent the search needed to arrive at C. Since a Thinknet network presents a search problem whose computational complexity is an exponential function of the network penetration depth, then the operation symbolized by [ ]=> may embody a high computational complexity. If this high computational complexity is actually the case then we can say that C has a high specified complexity with respect to the specifications A & B.