
Tuesday, November 07, 2017

Ontological Reductionism vs Mathematical Reductionism




On 23rd October I published a blog post with the above title. That post was an analysis of a draft paper co-authored by Jonathan Kopel promoting the concept of "Relational Ontology" and contrasting it with "Reductionism". Since then the paper has been updated, has a different author list, and now awaits publication. Jonathan asked if I could remove my original post as it pertained to the now defunct draft paper. This I have done. However, as my original post contained content which can stand without the paper, I reproduce that content in this current post. Moreover, when Jonathan's paper is eventually published I will review it once again and I am likely to make reference to the material that follows.

***

Firstly, then, what is reductionism? When used in the context of the physical sciences "reductionism" is usually understood to be the contention that every aspect of reality somehow "reduces to", or is fully "explained" by, the motions, properties and interactions of fundamental particles. This "reductionist" paradigm might also be characterized as "mechanistic" inasmuch as its only recognized qualities are defined by a purely geometrical dynamic. Reductionism in its strongest and most emphatic manifestations regards all other qualities as illusory. Notice, however, that here a) the concept of "explanation" is not defined and b) this sketch of reductionism actually conflates two forms of reductionism, viz:

A1: Ontological reductionism. This is the assumption that particulate physics constitutes some kind of ultimate reality to which all else "reduces", whatever that means. In comparison with this particulate world, the world we actually perceive is thought of as all but illusory. This strong view of reductionism may also be characterized as "elemental materialism".
A2: Mathematical reductionism. This is more a dream than an assumption: It is hoped there is a final succinct (mathematical) theory out there waiting for us which is general enough to fully describe the ontology assumed in A1. It is then hoped that in combination with A1 the final theory, as its name suggests, will explain everything. The increasing generality of our current theoretical narratives is usually taken as lending support for this hope and, moreover, may be offered as a reason to believe in A1.

Now, as a physicist I’m quite a fan of A2 (but not a dogmatic fan). Although we can’t be sure about A2, it is, nevertheless, at least an intelligible proposition. In contrast I regard A1 as either an incoherent philosophical bias, often adhered to without self-awareness, or a pictorial myth on which we can hang our thoughts and give some recognisable human visualisation to our mathematical formalisms (I find nothing wrong with the latter). But it is far from clear just what it means to assert that reality somehow “reduces” to elementary particulate components. Hence, although I can tentatively accept the mathematical reductionism of A2 I do not accept the elemental ontological reductionism of A1, an idea I find unintelligible.

Reductionism is not the only term whose meaning is unclear in this context: After all, what does it mean to "explain something"? But unlike "ontological reductionism" the concept of "explanation" can, I believe, be given a clearer meaning. If we take it that "explaining" is essentially a mathematical activity which simply provides some kind of successful mathematical description of the patterns displayed by the physical world, then this concept of explanation stands a chance of being defined with some precision (see footnote *1). But a successful mathematical description in and of itself doesn't entail the need to imbue that mathematics with fundamental ontological significance, a significance against which all other perceptions and conceptions are thought to be "illusory". Imputing some kind of elemental fundamental ontology to our formal mathematical terms amounts to an interpretation of the meaning of those terms. Such an interpretation would classify as a world view synthesis, a world view which amounts to a metaphysical belief as to the actual essence and true meaning of the world. Such a belief, which is essentially the content of A1, is highly contentious of course. The upshot, however, is that when A1 and A2 are conflated the theoretical narratives of physics become inseparably bound up with the contentious elemental reductionism of A1.

Scientific narratives emerge out of the interplay between experience and the trial theoretical constructions we tender in order to make sense of experience. It is a kind of dot-joining game where the "dots" are our experiences and the structures joining them are our theoretical conceptions. The scientific narrative is written in the language of the third person. But this language hides the truism that all science traces back to the experiences and theoretical constructions of a first person somewhere; the "dots" belong to a first person experience and the theoretical constructions which attempt to unify our patterns of experience are reified in a first person cognition. The third person account is framed, though, so as to give the impression that it refers to some reality beyond cognition. This projected "reality" is Kant's "thing-in-itself", an object which is really only mediated through our cognition and therefore in an absolute sense inaccessible. I wouldn't want to go as far as positing a kind of "cognitive positivism" by asserting that human cognition is the ultimate reality or somehow primary rather than secondary; that would be as bold a conceit as A1. However, it seems that ostensibly we have a tenuous grasp on the exact nature of this "thing-in-itself reality" and don't really know what it means. In contrast, as Kant suggested, we do have inside knowledge of what it means to be a first person reality whose perspective on the world is mediated through experience and theory (and perhaps ultimately this understanding may throw light on the very essence of all reality). Our first person theoretical perspectives may or may not resemble some ontology beyond cognition, for particles, strings, waves, fields, spaces or whatever may be just an expression of the way we are cognitively equipped to think about the thing-in-itself ontology. Although we have no absolute connection with the "elemental materialism" posited in A1, we nevertheless have a good registration between many of our theoretical constructs and the "dots" of our experience; that is, experience and theory cohere, and especially so when theory successfully predicts experience. Although this coherence is very suggestive of the existence of an ontology beyond our cognition, the registration between theory and experience doesn't reveal the "thing-in-itself-ness" of that ontology.


***

Because ontological and mathematical reductionism are easy to conflate, we find that anti-reductionists may look unfavorably on the idea of the potential completeness of mathematical physics and thus perceive their mission to be one of trying to throw doubt on the mathematical efficacy of physics. What may give power to the elbow of those who are tempted by this road is that even if our theories are complete, their mathematical intractability can scupper any idea that they give easy prognostications about the physical world. For example, even if we assume that our current quantum equations capture everything about fundamental particle interactions (which of course they may not), the mathematical consequences of even these equations, when applied to many particles, remain staggeringly complex and it is likely that multi-particle quantum mechanics is computationally irreducible; that is, only the system itself can simulate itself; there are no analytical mathematical shortcuts beyond running the particle system in real time in order to exactly compute the outcome; the system must be its own computer. In the light of this it is not particularly startling that so-called chemical bonds come in many different types and that these types consist of classes with fuzzy, blended boundaries.
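To make the notion of computational irreducibility concrete, here is a minimal sketch (my own toy stand-in, using Wolfram's Rule 30 cellular automaton rather than the quantum equations themselves): as far as anyone knows, the only way to discover this automaton's state after n steps is to run all n steps; there is no known closed-form shortcut.

```python
# A minimal illustration of apparent computational irreducibility using the
# Rule 30 cellular automaton (a stand-in, not the quantum case itself).

def rule30_step(cells):
    """Apply one step of Rule 30 to a tuple of 0/1 cells (fixed zero boundary)."""
    padded = (0,) + cells + (0,)
    return tuple(
        padded[i - 1] ^ (padded[i] | padded[i + 1])   # Rule 30: left XOR (centre OR right)
        for i in range(1, len(padded) - 1)
    )

def run(cells, steps):
    """Evolve the automaton; each step depends irreducibly on the previous one."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# A single seeded cell in the middle of a 31-cell row, evolved 15 steps.
initial = tuple(1 if i == 15 else 0 for i in range(31))
print(run(initial, 15))
```

Multi-particle quantum mechanics is of course vastly more complex than this toy, but the suspected moral is the same: the system must be its own computer.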

In any case science has so often depended on approximate, sometimes almost "toy town" models, simply because reality is too complex to take into account every impinging factor. These "toy town" models may even be simplifications of our own enunciated equations simply because those equations may be too difficult to solve analytically. Moreover, given that "isolated systems" are more often than not idealizations, approximating a chemical structure as a configuration of static elements is no doubt one of those toy town models. For example, I don't think anyone thinks those models of molecules showing well-defined coloured polystyrene spheres connected by toothpicks (i.e. bonds) are very literal – one can think of them as on a par with a child's stick man representation of a human being!

Even on the basis of quantum mechanics as it currently stands it is clear that our simplifying models are highly caricatured representations of the consequences of this mechanics. When using such "caricatures" to describe molecular dynamics it is probably prudent to use several competing metaphorical models which, only when taken together, maximise the sense-making power of what is in fact a very human narrative. But really all this is very much business as usual as far as science is concerned, a business where our computational idealizations and compromises should not be taken too literally. As I've already said, even our current equations, when used to treat multi-particle scenarios, are likely to be computationally irreducible. This means that only models employing simplifying approximations are going to be analytically tractable to limited human computational resources. Our cartoonish depictions of molecular dynamics are just the way science has always worked and will continue to work. Given what are clearly human limitations (which is the natural state of human affairs) there is no reason to throw up our hands and declare that somehow all this portends a broken scientific paradigm which needs to be discarded in favour of a new paradigm. For example, I would suggest that so-called Relational Ontology is not going to bring a fundamental revolution in science, but only an enhanced philosophical appreciation that the objects science deals with are necessarily relational in nature. A similar "radical" paradigm which is unlikely to usher in anything more than an enhanced philosophical understanding is that of "contextual emergence", an idea which Robert C. Bishop describes as follows:

Contextual emergence is the circumstance where domain A provides necessary conditions for the description or existence of elements of domain B, but lacks sufficient conditions for the description or existence of elements of domain B. This is to say that the sufficient conditions necessary to complete a set of jointly necessary and sufficient conditions for the description or existence of elements of domain B cannot be obtained from domain A alone. Information from domain B—a new context—is crucially needed 

As far as I'm concerned this is old news. For if I understand this passage correctly then "contextual emergence" even becomes apparent in something as elementary as a binary sequence. We readily talk about a "binary bit", but in isolation the binary bit is in fact an incoherent object: A binary bit is only a binary bit by virtue of it being within a sequential context – that is, its reality is necessarily mediated via its relation to other binary bits. But the sequence itself is a meaningless concept if regarded in isolation from the bits it contains; take away the bits and you've got no sequence. We have here a kind of circular mutual dependence: A bit doesn't have a reality apart from its sequential context and the sequence doesn't have a reality apart from its component bits. The concept of a sequence provides the necessary conditions for defining the existence of a bit, but not sufficient conditions for the description of a bit because the bit has its own degree of freedom of 1 or 0. Conversely, the sequence is not an entity independent of the bits because the state of each bit is a necessary condition for a description of the sequence. I say again, what's new here?
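A toy illustration of this mutual dependence (my own, not Bishop's): the same bit value 1 has no determinate significance in isolation; within a sequence its contribution is fixed entirely by its positional context, while the value of the sequence is in turn fixed only by the states of its bits.

```python
# The bit's value (0 or 1) is its intrinsic "degree of freedom"; the weight it
# carries comes only from its place in the sequential context.

bits = [1, 0, 1, 1]   # a binary sequence read as an unsigned integer, MSB first

for position, bit in enumerate(bits):
    place_value = 2 ** (len(bits) - 1 - position)
    print(f"bit {bit} at position {position} contributes {bit * place_value}")

# The sequence, conversely, has no value apart from the states of its bits.
print("whole sequence:", sum(b * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits)))
```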

Taking this further with an object a little more sophisticated than a binary sequence, here's what I wrote as an end note in my annotation of Jonathan's draft paper:

Relational ontology (RO) acknowledges that things have both intrinsic and extrinsic properties. Take for example a cat: We could isolate it in a weightless cryogenic vacuum for a short while in order to study its intrinsic particle configuration, but that would only yield half the story. The concept of a cat only makes complete sense in relation to its environment and how it uses that environment (e.g. territory defence, nocturnal hunting of prey, reacting to human owners etc.). Thus being a "cat" entails a huge burden of extrinsic properties (or relations) without which "being a cat" is rather meaningless. This sort of reasoning applies to most objects we can think of.

I also touch on this matter in my essay “The Great Plan”.

RO might encourage an enhanced philosophical understanding of the relational aspects of descriptive science but it doesn't advance the scientific understanding of chemistry; only more finely honed theoretical models will do that.
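As a hypothetical sketch of the intrinsic/extrinsic distinction in the cat example above (the class name and fields here are my own illustrative choices, not anything from the paper), one might model the cat as a data structure whose full description requires both its intrinsic properties and its relations to an environment:

```python
# A toy model: neither the intrinsic fields nor the relational fields alone
# amount to a complete description of the object.

from dataclasses import dataclass, field

@dataclass
class Cat:
    # Intrinsic properties: what the isolated "particle configuration" study yields.
    mass_kg: float
    fur_colour: str
    # Extrinsic (relational) properties: only defined with respect to an environment.
    territory: list = field(default_factory=list)
    prey: list = field(default_factory=list)
    owners: list = field(default_factory=list)

    def describe(self) -> str:
        intrinsic = f"{self.mass_kg} kg, {self.fur_colour}"
        relational = f"defends {self.territory}, hunts {self.prey}, lives with {self.owners}"
        return f"intrinsic: {intrinsic}; relational: {relational}"

print(Cat(4.2, "tabby", ["back garden"], ["mice"], ["the author"]).describe())
```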

***

As I have already said, we need to appreciate the different possible meanings of the term "explanation" with its sensitivity to one's world view, particularly if ontological reductionism and mathematical reductionism have been conflated. This appreciation will prevent putting ourselves on a collision course with the day-to-day business of the physical sciences. This day-to-day business inevitably employs simplified models, a mix of metaphors, idealisations and isolation approximations. Moreover, it is conceivable (although by no means certain) that one day the theoretical scientific account, in a purely formal mathematical sense, will be complete in terms of its descriptive power. But if that juncture should ever be arrived at, it still wouldn't follow that ontological reductionism is the logical conclusion. Ontological reductionism is just another comparative guess and/or belief as to the true nature of the ultimate thing-in-itself. In the meantime, whether we understand Relational Ontology or not, the same issues revolving round questions of interactive isolation and computational irreducibility will remain very much the stuff of routine science and will continue to be respectively addressed using iterative methods*2 and a cluster of metaphorical simplifications/approximations.

The mathematical objects of science are philosophically meaningless and unintelligible without positing the cognizing first person who is both an experiencer and a theoretical narrative constructor. In the introduction of my book Gravity and Quantum Non-Linearity I note this fact especially alongside the advances of neural science, which is developing a third person account of human beings in terms of neurons, electric fields, chemistry, molecules and what-have-you. Hence, we have here the first person constructing a third person theoretical narrative which is then used to explain the first person perspective. This is in fact a self-referencing loop; theories conceived and tested by a first person are then used to explain the first person perspective. I liken this self-reference to the practice in computer software where C++ programs can be compiled using a compiler written in C++. That is, C++ is defined in terms of C++. Likewise, human beings have constructed theoretical concepts which can be used to give an account of the human theory constructor. Thus, to attempt to do away with either the theoretical objects of science or the first person perspective is to do violence to the other: Without the first person perspective of conscious cognition the so-called "atoms and the void" posited and understood by the first person is an unintelligible concept. But when we look closely at conscious cognition from a third person perspective all we find is "atoms and the void". And only the first person knows just what inner conscious qualities are entailed by the apparently colourless third person world of the "atoms and the void". So, it is only when the two perspectives of the first and third persons are taken together that things start to make sense.
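The flavour of that self-reference can be conveyed by a minimal sketch (mine, and only loosely analogous to the self-compiling C++ compiler): a small Python program whose output is its own source code, i.e. a description that contains and reproduces the describer.

```python
# A minimal quine: the program's output is its own source text, a toy analogue
# of a descriptive apparatus that ends up describing itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```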


Footnotes

*1 On Mathematical Description
In my book Disorder and Randomness I look into the limits of theoretical narratives and their ability to "explain" in the descriptive sense of the word. For the purposes of the book I define "explanation" as the ability of small-space, short-time algorithms to generate patterns of observations, patterns which we assume can be represented by binary sequences as per a digital computer simulation. Absolute randomness is then defined as those sequences which are unreachable using small-space, short-time algorithms. The point to be made from this is that "explanation" in this sense is clearly a very human perspective; that is, it depends on the meaning of what we consider to be small and short; or in a single word, 'succinct'. The meaning of small and short is relative to our humanity; if we allow an indefinite extension to small and short then the sky is the limit in terms of explanation: complex and/or long-running algorithms can conceivably explain (i.e. describe) any possible sequence and the concept of what is truly random then goes out of the window with it; either that or the concept of "explanation" becomes a trivialism. If one can store enough information or allow an algorithm to run for long enough it is very likely that anything could be "explained", and this renders rather redundant the meaning of "explanation" in the descriptive human sense.
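As a rough sketch of this idea (using off-the-shelf compression as a crude proxy for "small space" description, which is my own approximation and not the book's formal definition), a highly patterned binary sequence admits a succinct description whereas a typical coin-flip sequence does not:

```python
# Compression size as a crude stand-in for the length of the shortest
# description: patterned sequences compress to almost nothing, random ones barely at all.

import random
import zlib

n_bits = 80_000
patterned_bits = [i % 2 for i in range(n_bits)]                 # 0,1,0,1,... pattern
random_bits = [random.getrandbits(1) for _ in range(n_bits)]    # coin flips

def pack(bits):
    """Pack a 0/1 list into bytes, 8 bits per byte."""
    return bytes(
        sum(bit << k for k, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )

for name, bits in [("patterned", patterned_bits), ("random", random_bits)]:
    data = pack(bits)
    print(f"{name}: {len(data)} bytes -> {len(zlib.compress(data, 9))} bytes compressed")
```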

*2 On Iterative Methods
Now, it’s a well-known cliché that in the final analysis everything is connected with everything else; complete and ideal isolation of any cosmic sub-system from everything else is just not absolutely possible. So, how then can we get an analytical scientific handle on the universe when strictly we should take into account the whole cosmic caboodle in our investigations of a subsystem? After all, in an absolute sense any subsystem we probe may be affected in complex and unknown ways by its environment, thus rendering any claim to having discovered a law which applies to the subsystem potentially invalid. The answer to this conundrum is experimental iteration.

To implement experimental iteration we start by making a crude judgement about what we think is needed to approximately isolate a system and then go about studying its patterns of behaviour. We can thus derive at least approximate analytical laws governing the universe. From there we can use our first-pass theories to get a better understanding of how to isolate subsystems. We can then start the process of subsystem experimentation over again and thus refine our theory, which in turn assists in the creation of better isolated subsystems, further helping to hone our theories. The hope is, and it is only a hope, that some kind of convergence takes place toward better theoretical narratives.

This iterative process has some resemblance to Newton's method of finding solutions to equations and, like Newton's method, it has to start out by assuming that the epistemic chances are stacked in our favour; namely, that we have initial hunches and heuristics which will ultimately lead us in the right direction. This is not such a problem for theists who see epistemic providence at work in science. But it is not a game faithless nihilists will feel secure with! Ironically, neither will Christian fundamentalists naturally take to this process because they are forever trying to throw doubt on the assumptions of rationality which are inevitably needed to make science work in the first place. Fundamentalists Jason Lisle and John Byle are cases in point.
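A minimal sketch of that Newton's method analogy (the function and starting points are my own illustrative choices): successive refinement converges quickly given a reasonable initial hunch, but can cycle or wander indefinitely given a poor one.

```python
# Newton's method: refine an estimate by iterating x <- x - f(x)/f'(x).
# The epistemic point: whether it converges depends on starting close enough.

def newton(f, df, x0, steps=10):
    """Iterate the Newton update, refining the estimate at each pass."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f = lambda x: x**3 - 2 * x + 2      # a cubic with a single real root near -1.77
df = lambda x: 3 * x**2 - 2

print(newton(f, df, x0=-2.0))   # a good initial hunch: converges to the root
print(newton(f, df, x0=0.0))    # a poor one: for this cubic the iteration cycles between 0 and 1
```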
