
Thursday, August 14, 2014

Melencolia I Part 3: Sharpening the Focus.


The previous parts of this series can be seen here:

I was interested to see this blog post by evangelical atheist Larry Moran where he refers to a post on the IDist web site Uncommon Descent by one Gordon E. Mullings (aka "Kairosfocus"). In the referenced post Mullings publishes a quote from an IDist who bemoans the fact that the academic establishment simply hasn't been moved by what to him is the clear and convincing case for his version of intelligent design. As often happens with members of a somewhat beleaguered and marginalized subculture, he just can't believe that this stonewalling is down to dispassionate intellectual rebuttal; rather, he sees it as a wilful, ulterior-motive-laden rejection of what to him is oh so obvious. So, all too typically, Mullings' quotation puts it down to a "heart problem" rather than a "head problem". The intended hint here is that those who continue in an informed rejection of Mullings' version of ID probably do so with bad consciences if not black hearts. This tendency to believe in the depravity of one's detractors is, as I have pointed out on my blogs, a background attitude which provides fertile ground for conspiracy theorism, should it take root. But to be fair, Mullings and his friends have often been unpleasantly abused, although frankly I don't think that some of Mullings' somewhat paranoid denunciations have helped calm things down.

For my own part I find it difficult to take sides here: polarization has killed off dispassionate debate. Many atheists are probably thoroughly annoyed by the North American IDists' claim that they have scientific authority to push their case for God via Dembski's explanatory filter (*1). But conversely, atheists also attempt to use scientific authority to rule theism out of court (see here). In any case Mullings often fails to do justice to himself with his paranoid behavior, and I doubt there is much respect for all his technical efforts – these are just brushed away by evangelical atheists without thoughtful engagement, as just so much window dressing and laughable sophistry. But there is, in my opinion, a deep flaw in the kind of IDist thinking Mullings stands for, viz: he talks about the computational resources of the cosmos being incapable of locating life using a "blind search", but he appears not to consider the obvious solution that the cosmos may have sufficient ongoing immanent divine resourcing for its search to be anything but blind. In fact the Wiki entry that comments on Dembski's "Universal Probability Bound" (an idea enthusiastically promoted by Mullings) spots the way out:

Dembski appeals to cryptographic practice in support of the concept of the universal probability bound, noting that cryptographers have sometimes compared the security of encryption algorithms against brute force attacks by the likelihood of success of an adversary utilizing computational resources bounded by very large physical constraints. An example of such a constraint might be obtained for example, by assuming that every atom in the known universe is a computer of a certain type and these computers are running through and testing every possible key. However, universal measures of security are used much less frequently than asymptotic ones.[6] The fact that a keyspace is very large is useless if the cryptographic algorithm used has vulnerabilities which make it susceptible to other kinds of attacks.
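To remind ourselves where the 10^150 figure comes from: as usually presented (for instance in the Wikipedia article quoted above), the bound is the product of three generous cosmological estimates, namely roughly 10^80 elementary particles in the observable universe, at most about 10^45 state transitions per particle per second (derived from the Planck time), and roughly 10^25 seconds of available time:

```latex
10^{80} \times 10^{45} \times 10^{25} \;=\; 10^{150}
```

Any specified event less probable than 1 in 10^150 is then declared beyond the reach of the cosmos's blind trial-and-error resources; it is precisely this framing that the rest of this post questions.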

For a theist like myself the weakness in Mullings' argument is that divine intelligence is, presumably, well able to express itself in a search that is much better than "blind". Like many IDists of his persuasion Mullings gives every impression of being against the idea that the cosmos has the wherewithal to "naturally" generate life. This is, I believe, an outcome of the polarised state of the debate, where the respective sides have tacitly taken on board a "God vs Nature" dichotomy along with the implicit conclusion that it's an exclusive choice between "natural forces" and God. This dichotomy is reinforced by Dembski's explanatory filter, which necessarily imposes a sharp distinction between "natural causes" and "intelligent agency".

It is ironic that my own position is probably nearer that of the atheists, viz: that the cosmos has generated life … although I propose that the cosmos has generated life because it is providentially resourced to do so. Where I differ from these atheists is that I see the generation of life as a remarkable, non-trivial property of the cosmos. In fact Mullings and I probably have common ground in agreeing that any "law and disorder" description of the cosmos always leaves us with irreducibly startling features; we can never trivialize those features away with law and disorder explanations. The ontology of law and disorder posited by the physical sciences is, in the final analysis, complete when full description is reached; attempts to push it any further lead to a regression of nested explanatory contexts, and this is mathematically akin to attempts to compress an already compressed data set: further attempts at compression gain no further reduction in the data string. The highly unrepresentative configurations of life are startling and remarkable, but attempts to explain them with law and disorder scenarios lead to logical conditions that are no less startling and remarkable. As Sir Patrick Moore once said, our science is strong on detail but weak on fundamentals. We have to seek a very special kind of explanatory narrative if we are to stop this regress: an extraordinary universe requires an extraordinary explanation. But that's another story I won't talk of here; let's leave theology out of it for the moment and just stick to physical science.

The bald fact is that Mullings and his colleagues have failed to get their message across; I put that down in part to their framing of the question of life with what appears to be a dualist "God did it vs Nature did it" dichotomy. This dualism is a gift to evangelical atheists, who understandably are prompted to attack the weakness in this dualist IDism; namely, that there is little evidence for a "God-of-the-gaps-did-it" view. For it is clear that, whether Mullings likes it or not, intended it or not, at least one atheist sees Mullings' case as a "God of the gaps" model. As Larry Moran says in his post:

There is no evidence for the existence of a creator who meddles in the affairs of living organisms.

(This statement, by the way, betrays a misunderstanding of how evidence is actually used; see here for details). If this "interfering" deity is what Mullings and his colleagues are really promoting – and they give every impression of doing so – then it is no wonder that the academic establishment, Christian and atheist alike, takes such a dim view of their efforts. While I personally don't rule out the anomalous and the miraculous, the fact is that such things are, by their very definition, few and far between, and they amount to positing an erratic ontology for which it is impossible to provide high standards of evidence; scientific epistemology has great difficulty with an ontology of erratics. So, if, as the North American IDists appear to claim, life was generated by the one-off interferences of an eminent intelligence, an intelligence which could well be little green or gray men inside the cosmos, then that is going to generate a very soft science, a kind of super-attenuated archeology in fact.

In order to back up his case for a “meddling” deity Mullings posts this graphic panel:

This graphic by Gordon E Mullings perpetuates the North American "Wow!" signal argument for an interfering Intelligence eminent to the processes of nature.

Living structures, if described in terms of their self-maintaining activities, can be relatively simply defined without, as Mullings rightly says, specifying all the coordinates of their configuration. Moreover, compared to all the imaginable possible atomic arrangements it is clear that this class of self-maintaining structures, even the simplest of them, is negligibly small, and so on a purely chance selection no member of this set would likely be found even given 10^150 trials! But the trouble with the argument here is that Mullings doesn't define what he means by "all relevant possibilities", and this vague reference hides a big issue.

To see this let's take for example a construction set like Lego. Clearly the total possible ways of arranging the bricks in a big Lego set is a huge number and utterly dwarfs the set of constructions that do "something useful". But there is a big difference between the total number of conceivable arrangements of Lego bricks and the number of arrangements that come to light should we start moving around the Lego bricks according to some constraining developmental regime, perhaps with the aim of searching for "something useful". These two sorts of "relevant possibilities", namely the total conceivable possibilities and the possibilities that come to light as a result of constrained change, are very different in character. Ostensibly it might seem that any configuration of atoms is possible and violates no physical law, but - and this is the big "but" - physical law is also about change, and given the regime of change in our cosmos it is clear that not all configurations pop up in a given time; so in the developmental sense the "relevant possibilities" are very different from the conceivable "relevant possibilities".
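As a toy illustration of that gap (the numbers and the "single-position change" rule here are purely illustrative, not a model anyone has formally proposed): treat a construction as a string of n positions, each taking one of b brick types. The conceivable space then has b^n members, but a regime that can only make up to k local changes from a given starting configuration reaches vastly fewer of them.

```python
from math import comb

def conceivable(n, b):
    """All conceivable configurations: b choices at each of n positions."""
    return b ** n

def reachable(n, b, k):
    """Upper bound on configurations reachable from one starting point by
    at most k single-position changes (a crude 'constrained regime')."""
    return sum(comb(n, i) * (b - 1) ** i for i in range(k + 1))

n, b, k = 100, 6, 10   # illustrative numbers only
print(f"conceivable: ~1e{len(str(conceivable(n, b))) - 1}")
print(f"reachable:   ~1e{len(str(reachable(n, b, k))) - 1}")
```

With n = 100, b = 6 and k = 10 the conceivable space has around 10^77 members while the reachable set has only around 10^20; the moral is simply that counting conceivable configurations tells us very little about what a constrained process of change can actually visit.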

If Mullings had taken this distinction on board I suspect his panel would be very different in content. In fact I have yet to see any serious consideration by his IDist subculture of how physical constraints enhance the search for organic forms; when they have considered this topic they usually think in terms of the "dynamic fallacy", whereby they believe the problem is to counter suggestions that the organic biopolymers are somehow coded directly into the physical regime. Better perhaps is their concept of Irreducible Complexity, but even that they have managed to screw up on. If rightly defined, however, irreducible complexity would be a big evolution stopper, and Mullings and co would then have the last laugh. However, there is one other big issue I have with Mullings which may make all this irrelevant. This issue is Mullings' promotion of Dembski's probability bound; namely, that the cosmos has an upper limit of only being able to locate at best a 1 in 10^150 instance.

I would propose that potentially the cosmos has a much, much greater power of searching out rare cases than Dembski's probability bound suggests, and this follows from certain quantum mechanical considerations. We can begin to appreciate this by first looking at the classical analogy of quantum mechanics – namely, elementary diffusion. One dimensional diffusion can be simulated using a system of nodes laid out on the x-coordinate. If the diffusion distribution is represented by Y(x) then the system of nodes can simulate the diffusion simply by signaling one another with wY(x), where x is the position of a node and w is the analogue of the step probability in a random walk. The result is a simulation of the elementary diffusion equation:

Equation 1

 Where e is the distance between nodes and v is the velocity of the signal. (More detail about this topic can be read in my private publication “Gravity and Quantum Non-Linearity”)
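In case the equation panel does not display, the form I have in mind is roughly as follows, suppressing the numerical factors (which are dealt with properly in Gravity and Quantum Non-Linearity); the diffusion constant scales as the product wve:

```latex
\frac{\partial Y}{\partial t} \;\propto\; w\,v\,e\,\frac{\partial^{2} Y}{\partial x^{2}}
\qquad\text{(equation 1, sketched up to a numerical factor)}
```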
The number of nodes working in parallel to simulate this equation is potentially proportional to L/e, where L is the length of the x coordinate. But this is just for one dimension. If we have N particles then the diffusion distribution is represented by Y(x1, …, xj, …, xN), a quantity defined over a space of N dimensions. The appropriate diffusion equation is then:
Equation 2

…where for simplicity I have assumed that e, w, and v are independent of coordinate, although in general this is not the case.
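Roughly, and again suppressing numerical factors, this is just the same operator summed over the N coordinates:

```latex
\frac{\partial Y}{\partial t} \;\propto\; w\,v\,e \sum_{i=1}^{N} \frac{\partial^{2} Y}{\partial x_i^{2}}
\qquad\text{(equation 2, sketched)}
```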

If for the sake of argument we assume that each coordinate has length L then the total number of nodes will be (L/e)^N. These nodes effectively act as our processors, and so potentially the simulation could utilize the power of (L/e)^N nodes. If N ~ the number of particles in the observable cosmos we can see straight away that the number of processors is so huge that potentially it will return a processing power far in excess of the strictures of Dembski's probability bound; this bound only raises 10 to a 3-digit power, whereas in (L/e)^N the power N, assuming a figure similar to the number of elementary particles in the observable universe, is far in excess of 3 digits.
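To put a number on the contrast, the exponent of (L/e)^N scales with N itself:

```latex
\left(\frac{L}{e}\right)^{N} \;=\; 10^{\,N \log_{10}(L/e)}
```

With N of order 10^80 the exponent N log10(L/e) is itself a number of around eighty digits, even for a very modest ratio L/e, whereas Dembski's bound has an exponent of just three digits.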

However, our simulation isn't doing anything useful. In order to make it build things we need to add a potential term, V, thus:
Equation 3

Here I have taken the quantity e inside the summation and made it a variable that depends on coordinate, but this will not substantially change the enormous number of processors involved. The significance of this change will become clearer shortly.
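Roughly speaking, and with the same caveats about the precise constants as before, equation 3 has the shape of the diffusion operator with the coordinate-dependent e_i inside the sum, plus the potential term discussed next:

```latex
\frac{\partial Y}{\partial t} \;\propto\; w\,v \sum_{i=1}^{N} e_i\,\frac{\partial^{2} Y}{\partial x_i^{2}} \;+\; V(x_1,\dots,x_N)\,Y
\qquad\text{(equation 3, sketched)}
```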

The potential term V(x1, …, xN)Y is a way of "pumping in" signal in some regions but extracting it in other regions in order to conserve the distribution for correct normalization. This term, as we know, can be arranged so that there is a tendency for particles to stick together, thereby acting much like the attachments of some kind of construction set. If so, then this will mean that the distribution has a tendency to accumulate around certain coordinate values representing the case where particles coagulate. To be sure, however, this accumulation of the distribution would be very small if the space is very large and the size of the particles, in terms of their surrounding potentials, is small; this is because under these conditions there is a very small chance that the particles will find such configurations. But here's the crucial point: these configurations will effectively be flagged and marked by a raised distribution at those points.
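Here is a minimal numerical sketch of that flagging effect (a toy only, not a simulation from Gravity and Quantum Non-Linearity, and the parameter values are arbitrary): run the node-signalling diffusion on a one-dimensional ring of nodes, let a potential term pump signal into a small "sticky" region, renormalize at each step, and the distribution ends up visibly raised over that region.

```python
import numpy as np

nodes = 200                      # 1-D ring of signalling nodes (periodic boundary)
Y = np.full(nodes, 1.0 / nodes)  # start with a flat, normalized distribution
w = 0.2                          # analogue of the random-walk step probability

# A potential that "pumps in" signal over a small region (a crude stand-in
# for particles finding a sticky configuration) and is zero elsewhere.
V = np.zeros(nodes)
V[95:105] = 0.01

for step in range(1000):
    # Each node receives w*Y from its two neighbours and passes on 2*w*Y.
    neighbours = np.roll(Y, 1) + np.roll(Y, -1)
    Y = Y + w * (neighbours - 2 * Y) + V * Y
    Y = Y / Y.sum()              # renormalize to conserve the distribution

print("mean level outside the sticky region:", Y[:95].mean())
print("mean level inside the sticky region: ", Y[95:105].mean())
```

In a space this small the raised region is easy to see; in a realistically vast configuration space the same peak would be a vanishingly thin spike, which is exactly the point made above.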

We can considerably enhance this simulation in favour of organized configurations by turning equation 3 into its quantum mechanical equivalent (see again Gravity and Quantum Non-Linearity). This is easily done by simply replacing the real signal wY of ordinary diffusion with complex-number signaling of the form iwY, thus giving us the multidimensional Schrödinger equation: (*2)

Equation 4

…where w, v and e_i are adjustable constants that can be set to give us the usual quantum mechanical factors of Planck's constant and mass. Notice, however, that the factor e_i, which I placed inside the summation, can now be seen as a way of simulating different particle masses.
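Roughly, and leaving aside questions of sign and the exact placement of the constants, the shape of the equation obtained by replacing the real signaling coefficient w with iw is

```latex
\frac{\partial Y}{\partial t} \;\propto\; i\!\left( w\,v \sum_{i=1}^{N} e_i\,\frac{\partial^{2} Y}{\partial x_i^{2}} \;-\; V(x_1,\dots,x_N)\,Y \right)
```

which is recognisably the standard multidimensional Schrödinger equation

```latex
i\hbar\,\frac{\partial \psi}{\partial t} \;=\; -\sum_{i=1}^{N} \frac{\hbar^{2}}{2 m_i}\,\frac{\partial^{2} \psi}{\partial x_i^{2}} \;+\; V(x_1,\dots,x_N)\,\psi
```

once w, v and the coordinate-dependent e_i are traded for Planck's constant and the per-particle masses m_i.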
The reason why this equation is a huge improvement over ordinary real diffusion is that the wave nature of the resulting solutions has the effect of cancelling out (when normalized) huge fields of bland randomness, thereby enhancing the relative presence of those peaks of coherent and organized structures. In short, quantum signaling is a way of ensuring that organization and coherence become disproportionately represented. This is just the sort of constraint we need to help enhance the search for life.
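A very small numerical illustration of that cancellation (a toy of my own, not anything from the formal treatment): sum a large number of unit signals with random phases and the result is tiny compared with the same number of signals in phase, so after normalization the coherent contribution dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# "Bland randomness": unit-amplitude signals with uniformly random phases.
random_phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
incoherent_sum = abs(random_phases.sum())      # grows only like sqrt(n)

# "Coherent structure": the same number of signals, all in phase.
coherent_sum = abs(np.exp(1j * 0.3) * n)       # grows like n

print(f"incoherent |sum| ~ {incoherent_sum:,.0f}   (sqrt(n) ~ {n**0.5:,.0f})")
print(f"coherent   |sum| = {coherent_sum:,.0f}")
```

Real diffusion, with its all-positive signals, has no such cancellation: every contribution, however disorganized, adds in.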

So, the points I would like to leave us with at this stage are:

ONE: The quantum mechanical processing potentially involves huge numbers of processors.
TWO: Bland fields of randomness are canceled out.
THREE: Organized and highly coherent structures are flagged with raised distributions.

But there is a little problem: in the Wiki entry about Dembski's probability bound of 10^150 we read that Seth Lloyd has come up with a bound whose power of ten is of a similar order of magnitude to Dembski's. Lloyd and Dembski aren't sloppy workers, so how does this square with the enormous processing power I'm proposing, where the power of ten isn't merely a three digit number but is itself a power of ten?

The answer may be this: the sort of quantum mechanical vision I've sketched out above entails quantum systems maintaining the ambiguity of their prospecting signals over huge regions of space. The trouble is, at the macroscopic level we don't observe an ambiguous reality – the apparent discontinuous jumps of the state vector always seem to contrive an unambiguous macroscopic reality. This "collapsing" of the state vector has the effect of clearing away the work of huge numbers of nodes (*2). So in this light I suspect that Lloyd and Dembski are probably right in reducing the apparent processing power to a mere three digit power of ten. But the enormous computational potential of QM is nevertheless clear; perhaps we simply don't perceive it because human consciousness is only ever handed macro states that have been "collapsed" into a far less ambiguous form; who knows what goes on in between these discontinuities?

I find I can't dismiss the powerful computational potential of quantum mechanics: QM looks to me as if it is meant to use that potential. In fact, let me speculate a bit: those prospecting signals look like the means by which possibilities are probed before a selection decision is made – in short, an important aspect of intelligent activity: seek, find, reject and select. So, if my guess is right, then in quantum mechanics we are watching intelligence at work at the low level, much as we might see the low-level neural signaling of human consciousness if we look very closely; the trouble with the low-level view is that it so easily misses the big picture.

An extraordinary cosmos requires an extraordinary explanation: for me the idea of the cosmos being intelligence in action is extraordinary in a way commensurate with the extraordinariness of the universe itself. However, people's intuitions about what counts as extraordinary are not necessarily going to concur and may actually conflict. But my excuse is that I'm in uncharted waters here and I can only present my own personal speculations and intuitions about the nature of the universe. This is no basis for accusing those who don't agree with me about God of having bad consciences and perhaps even dark hearts, or, as is the habit of fundamentalists, of accusing disbelievers of "suppressing the Truth in unrighteousness"! This sort of epistemic arrogance leads to tribalisation and at the extreme end can even be used to legitimize genocide.

...to be continued (This is a work in progress)


Footnotes
*1 This filter is a useful heuristic within the cosmos, but it falls over badly in a theological sense when used as an argument for God; with its clear cut "natural forces" vs. "intelligent agency" distinction, God competes with his own creation as an agent of causation.
*2 In linear quantum mechanics there is symmetry between momentum space and geometric space, but in this node simulation of QM it is meaningless to talk about momentum space being composed of a set of mutually signaling nodes. In this simulation picture momentum emerges as a synthetic function of the gradient of the wave function.

Notes
On indistinguishability: The indistinguishability of fundamental particles makes the idea of the distinguishability of substance problematical. I suggest that quantum physics doesn't create any "substance" (whatever "substance" means) but rather creates configurations taken from the platonic realm. Distinguishability is a result of distinguishability of configuration, and therefore identity of substance is bound up with identity of form.
On Fine Tuning: In the quantum mechanical equation in this article V is a given; so how do we know from the outset that a given V will support organic forms? The answer to this question may be along the lines of the Church-Turing thesis; namely, that there is such a thing as the universal construction set, a set that will build anything (or compute anything). Thus construction sets can be judged on whether or not they are universal construction set complete. We could define a “fine-tuned” construction set as one of those construction sets that is universal construction set complete. My guess is that there is probably an infinite number of universal construction sets. Conceivably, the space of all construction sets could itself be explored with a signaling system.
