This lot seldom agree to disagree.
Quoting from the wiki entry on Aumann's agreement theorem:
Aumann's
agreement theorem says that two people acting rationally (in a certain precise
sense) and with common knowledge of each other's beliefs cannot agree to
disagree. More specifically, if two people are genuine Bayesian rationalists
with common priors, and if they each have common knowledge of their individual
posterior probabilities, then their posteriors must be equal.[1] This theorem
holds even if the people's individual posteriors are based on different
observed information about the world. Simply knowing that another agent
observed some information and came to their respective conclusion will force
each to revise their beliefs, resulting eventually in total agreement on the
correct posterior. Thus, two rational Bayesian agents with the same priors and
who know each other's posteriors will have to agree.
Studying
the same issue from a different perspective, a research paper by Ziv Hellman
considers what happens if priors are not common. The paper presents a way to
measure how distant priors are from being common. If this distance is ε then, under
common knowledge, disagreement on events is always bounded from above by ε.
When ε goes to zero, Aumann's original agreement theorem is recapitulated.[4]
In a 2013 paper, Joseph Halpern and Willemien Kets argued that "players
can agree to disagree in the presence of ambiguity, even if there is a common
prior, but that allowing for ambiguity is more restrictive than assuming
heterogeneous priors."
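The mechanism behind the original theorem can be sketched in a toy model (this is my illustration, not from the Wikipedia entry): two Bayesian agents share a common prior over a coin's bias, each privately observes one flip, and because in this simple setup an announced posterior uniquely reveals the observation behind it, each agent can condition on the pooled evidence and the posteriors coincide. All numbers here are illustrative.

```python
# Toy Aumann-style agreement: common prior over whether a coin's bias
# is 0.7 or 0.3; each agent privately sees one flip (1 = heads).

def posterior(prior_h, flips):
    """P(bias = 0.7 | flips), given prior_h = P(bias = 0.7)."""
    p_h, p_not = prior_h, 1 - prior_h
    for f in flips:
        p_h   *= 0.7 if f else 0.3   # likelihood under bias 0.7
        p_not *= 0.3 if f else 0.7   # likelihood under bias 0.3
    return p_h / (p_h + p_not)

common_prior = 0.5
alice_flip, bob_flip = 1, 0          # private observations

alice_post = posterior(common_prior, [alice_flip])   # 0.7
bob_post   = posterior(common_prior, [bob_flip])     # 0.3

# Announcing a posterior of 0.7 in this model can only mean "I saw
# heads", so each agent reconstructs the other's evidence and updates
# on the union of observations:
alice_final = posterior(common_prior, [alice_flip, bob_flip])
bob_final   = posterior(common_prior, [bob_flip, alice_flip])

assert alice_final == bob_final      # agreement, as the theorem requires
print(alice_final)                   # 0.5 -- the two flips cancel out
```

In the general theorem the agents need not be able to infer each other's raw evidence; iterated exchange of posteriors under common knowledge does the work. This sketch only shows the simplest case, where one round suffices.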
The big question here is, of course, why the model on which this theorem is based doesn't describe the real world. It reminds me of Olbers' paradox: from fairly basic mathematics Olbers "showed" that the night sky should be bright with starlight, but because it isn't, this leads to profound questions about the nature of the universe. In that sense Olbers' work was a stroke of genius. Aumann's theorem may have a similar status. Both analyses deal with signalling and both draw conclusions from this signalling using models with inbuilt but self-aware assumptions. Hence both prompt profound questions about why their respective models don't quite capture reality: where have they gone wrong?
On one level the theorem is common sense: the average (wo)man, whatever they might say when in a philosophical mood, tends not to be a relativist and actually has a working belief that there is such a thing as a true picture of reality to be aimed at, applicable to all interlocutors. Surely, then, in open, transparent, fair and unprejudiced dialogue where the evidences are equally apparent to all and where everyone is being meticulously logical, this true reality can, given sufficient discussion time, be agreed upon. Thus anyone with common sense knows that in theory we should be able to agree to agree, because there is a truth out there to be agreed upon. But real life isn't that simple; real-world epistemics ensures that. If, like myself, you are not a relativist, then you know that "agreeing to disagree" is an unsatisfactory state of affairs, but epistemic practicalities nevertheless cut across the ideal and often preclude final agreement.
As the Wiki entry implies, Aumann's theorem can be questioned on several of its assumptions: Are humans Bayesian rationalists? If not, why not? Do people act rationally? If not, why not? Do they share common knowledge of priors? (Which seems to be Hellman's tack.) Is there an arbitrariness about evidences? Other questions can also be raised: Are the communication channels non-corrupting? Do human social and relational factors have a bearing? Is it always an adaptive heuristic for a complex adaptive organism like a human to follow the truth? (In particular, it might be safer to follow the crowd!) Can all the signals we get from our environment be shared by all?
In a Facebook discussion group I attempted to get a handle on just one of the factors which may scupper Aumann's model. In this discussion I had in mind some concerns about the extreme left wing, in particular the Socialist Workers Party, whom I had contact with in the 1980s and who were as assured of their position as are the Christian fundamentalists. Significantly, like the Christian fundamentalists, their arguments entailed strong moral and group-identification components. Moreover, at times their class-warfare theory sounded a bit like conspiracy theorising.
Here are my recent Facebook notes:
Idealists have minimal doubts. They are on a mission, a mission
usually tied to a group think and – this is the killer – a morality which
suggests defection is an immoral act in the face of what the group will claim
is “obviously true”, the plain reading of reality. Ergo, they have no sense of
epistemic distance & doubt and believe there is no room to agree to disagree;
defectors, apostates and detractors are all immoral, to be condemned in the
strongest possible terms. If anyone defects the defector will have a nagging sense of
betrayal and of cutting across moral standards. This ethos is sufficient to
keep many people in place who might otherwise have doubts.
This
general picture was certainly true of the socialist workers I met in the 1980s:
The notion that one is betraying one’s class if one doesn’t subscribe to
Marxist theory was very prevalent. And of course betrayal is regarded as the
lowest form of immorality, so the pressures are tremendous.
As you can probably see, most of the above comments port readily to religious groups.
One more thing: If you do have strong doubts whether of fundamentalism
or Marxism there are strong group & moral pressures which encourage
self-examination of one's doubts; one is encouraged to believe one has made
some kind of mistake in one's thinking if it goes against the "manifest
truth" of the group think.
What worries me is that we are seeing some hints of this process in
the Corbyn camp. No surprise, then, that the SWP have signed up to him.
People well immersed in the group-think genuinely believe it, I think;
any difficulties are waved away with the thought that the clever members of the group have
got it all in hand. In short, the authenticity of the group is effectively
posited as an axiom in one's thinking. Given all that is at stake when one has thrown one's lot into this kind of epistemological protection racket, this behaviour of giving group-think an
axiomatic status may have adaptive value, and it needs to be factored in when
talking to committed members of groups.
Basically what I'm saying here is that one's social identifications have a big influence on one's world view. Social links take a lot of time and mental resources to form and therefore if one's cultural identifications are effectively being posited as an axiom in one's thinking it's no surprise if people who identify with different communities can't agree even over protracted periods of discussion.