Sunday, May 22, 2016

You can do this at home: Measuring the wavelength of light.

Homemade micrometer for measuring the wavelength of light using shadow fringes.

In the spring of 1970 I was playing around with a convex lens at night. For some idle reason I pointed the lens toward the window and looked through it at the street lamps in the distance. Of course, if you butt your eye up against a convex lens and look at a distant lamp it becomes a blurry blob of light. But what caught my attention was that although I was about 6 feet from the window pane I could see silhouettes of the marks and imperfections on the window, not only in astonishing detail but also surrounded by very clear fringes of wave interference. I was in the sixth form at the time and in physics classes we had done the usual Young’s slits experiment with its well-known striped interference pattern. As is well known, Young’s slits is an experiment which has sorely tested our concept of reality and has consequently been the cause of much angst in physics. But that experiment requires quite a sophisticated laboratory set-up, and yet here, quite accidentally, I was confronted with a similar observation. I jotted down some notes (with a fountain pen!) explaining what I thought I was seeing and these notes can be seen in the scan below:

What’s happening here is that when the relaxed eye is looking through the lens it is seeing a magnified image an inch or two in front of the lens; this is the point where you would place an object if you wanted to examine it using the lens. But because in this case there was nothing at the lens’ focal point, what I was actually focusing on were the shadows formed by the marks on the window. I could see the edges of those shadows in sharp outline, albeit fringed by interference patterns. A bonus was that the fringes increased in size the further I got from the window. But this magnification effect was offset somewhat by the fact that the street lamps, although distant, weren’t true point sources of light, and so the further away I got from the window the more the shadows and the fringes started to blur as umbra effects set in. (Conceivably this effect could be used as a method to measure the angular size of the light source.)

An elementary version of the wave theory behind this effect, which I was working toward in my jottings, can be worked out as follows:

In the above diagram the vertical blue line represents the incoming wave front from the approximate point source. This wave front meets an obstacle shown in green. The shadow cast by the obstacle is shown in black on the right of the diagram. If light motion were described purely by geometric mechanics then for a point source the shadow would be precisely sharp. But of course light motion is described using wave mechanics and this explains the interference fringes. In the model above we imagine the top of the obstacle (which I have labelled “A”) to become the source of a reflected secondary wave. When this secondary wave is combined with the incoming plane wave it produces an interference pattern in the region of B.

At point B the reflected wave has travelled a distance AB whereas the plane wave front has travelled a distance equal to AC. Now, let us put AB = h and AC = d and BC = x. If the wavelength of light is λ then for the reflected wave and the plane wave to form a node of complete cancellation this requires a difference in their travel distance of an odd number of half wavelengths. That is, for a dark node to be seen at B we require:

h − d = (2n − 1)λ/2 …. 1.0

…where n is an integer.

Since the triangle ABC is right-angled at C we have h = √(d² + x²), and so 1.0 becomes:

√(d² + x²) − d = (2n − 1)λ/2 …. 2.0

Since d >> x we can use the binomial approximation and 2.0 becomes:

x²/2d = (2n − 1)λ/2 …. 3.0

After some manipulation this becomes:

x = √((2n − 1)λd) …. 4.0

Note that n increases much faster than x and hence the convergent appearance of the fringe lines surrounding an obstacle. For n = 1 this reduces to:

λ = x²/d …. 5.0
Some years after working out this theory, in 1976 to be precise, I built the micrometer shown in the picture at the head of this article. I used this micrometer to measure the distance to the first dark node for the shadows cast by an approximate point source. In this experiment I set d = 18 cm and measured x to be 0.029 cm. Hence, using equation 5.0, I measured the wavelength of light to be about 4.7 × 10−7 m. As I only had a white light source this value is a ball-park figure, and this ball-park is the window of the visible spectrum, which ranges from 4.00 × 10−7 to 7.00 × 10−7 m.
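For readers who want to check the arithmetic, the n = 1 calculation is easily reproduced (a quick sketch in Python; the variable names are mine, not part of the original apparatus):

```python
# Reproducing the n = 1 calculation: wavelength = x^2 / d,
# using the measured values quoted above.
d = 0.18        # obstacle-to-shadow distance in metres (18 cm)
x = 0.029e-2    # measured distance to the first dark node in metres (0.029 cm)

wavelength = x**2 / d
print(f"wavelength ≈ {wavelength:.2e} m")   # prints wavelength ≈ 4.67e-07 m
```

…which lands comfortably inside the visible window quoted above.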

This experiment, based as it is on theory that goes back to Huygens, is pretty passé. But even so there are several profound features here worthy of note:

1.    The micrometer I was using was a homemade affair which employed a standard 60 threads/inch screw. As I was attempting to measure to the nearest 1/10 of a turn this means I was trying to measure to 1/600th of an inch. That works out at 4.2 × 10−5 m. And yet the wavelength I was measuring was about 100 times smaller than that. So, this home-brewed experiment was effectively measuring distances of around ½ millionth of a metre. This accuracy is achieved because in equation 4.0 we see that d effectively magnifies the value of x, although only by a factor proportional to the square root of d.

2.      All interference patterns of this kind pose the well-known conundrum of quantum mechanics:  Viz: If the point source was emitting one photon at a time we would still get the interference pattern when integrated over long enough time, implying that even a single photon is associated with an extensive wave field and therefore sensitive to the material geometric layout across a volume of space; photons are not localised point particles until the wave front “collapses”.

3.    There is a peculiar (and profound) feature of the theoretical model I used to derive equation 5.0. In this model we imagined the plane wave and the reflected secondary wave to travel independently to point B. But if this is so then to get a predictable interference pattern the precision with which the waves fill the space between the obstacle and the shadow is nothing short of mind-boggling. For let’s say there was a small percentage random variation in the lengths of each of the waves between obstacle and shadow. Given the very high number of waves between these two points, this variation, if modelled as a version of random walk, would likely build up between obstacle and shadow. This would mean that the synchronisation of wave effects required for equation 1.0 to work would be disrupted, and either a spuriously variable value of x would be observed or the interference pattern would be too blurred to be seen. Presumably the length d could be many miles and yet the interference pattern still observed, implying that each wave has the same length to a very high precision indeed. The observed wave fringes may be an indication that space itself is made up of discrete numbers of nodes, with exact numbers of these nodes supplying the accuracy needed for the interference phenomena.
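The scale of the precision being demanded here can be roughed out with a back-of-envelope model (my sketch, assuming each of the N waves spanning the gap picks up an independent fractional length error ε, so that the accumulated random-walk error grows like ελ√N):

```python
import math

wavelength = 5e-7   # metres: a mid-visible value (an assumption of the sketch)
d = 0.18            # obstacle-to-shadow distance from the experiment
n_waves = d / wavelength          # roughly 360,000 waves span the gap

for eps in (1e-2, 1e-4, 1e-6):    # trial per-wave fractional length errors
    sigma = eps * wavelength * math.sqrt(n_waves)
    print(f"eps = {eps:.0e}: accumulated error ≈ {sigma / wavelength:.1e} wavelengths")
```

On this model a 1% per-wave variation would smear the pattern by several whole wavelengths over just 18 cm, so for the fringes to survive, the per-wave variation must sit well below roughly one part in a thousand.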

Tuesday, May 17, 2016

Search, Find, Reject and Select

Searching for configurations
In my two series of articles Melencolia I and The Thinknet Project I've been trying to get a handle on the mechanics of intelligence and its relevance to evolution. One aspect of intelligence I feel fairly sure about is its general teleological structure, which can be expressed by the rubric Search, find, reject and select. That is, it is a necessary (but not sufficient) condition of intelligence that it has general goals, sometimes vaguely expressed. This entails searching some kind of space of possibilities and testing and rejecting cases with the ultimate aim of selecting a solution to those goals. Clearly, this very general conception is short on detail, detail which no doubt in any working intelligence would be elaborated to the nth degree. But, I propose, the general idea of goal seeking is at the heart of all intelligence. The elaboration of intelligence is, in fact, self-explained by my general characterization of intelligence: Viz: Teleological targets can often be expressed quite simply, e.g. we are looking for a way to fly from A to B. But to practically fly from A to B with efficiency requires a highly elaborated configuration of materials. Likewise, if our goal is to find a system that itself seeks and finds targets then a real target seeking system is itself likely to be highly complex. In short the complexity of intelligent systems is explained by the essential nature of intelligence itself - as is so often the case in the subject of cognition, we find self-affirming self-reference to be at the heart of it. Sometimes intelligence is characterised as "complex adaptive systems" (cf. John Holland). This characterization has some merit, but it also amounts to a form of goal seeking; in this case the goal of adapting.
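For what it's worth, the bare bones of the rubric can be caricatured in a few lines of code (a deliberately minimal sketch; the function name and the toy goal are my own inventions, purely for illustration):

```python
def search_find_reject_select(candidates, acceptable, score):
    """Scan a space of possibilities, reject the failures, select the best find."""
    best = None
    for candidate in candidates:              # search
        if not acceptable(candidate):         # test each find...
            continue                          # ...and reject
        if best is None or score(candidate) > score(best):
            best = candidate                  # select
    return best

# Toy goal: the longest word containing no repeated letters.
words = ["flight", "engine", "wing", "airframe", "rotor"]
print(search_find_reject_select(words,
                                lambda w: len(set(w)) == len(w),
                                len))   # prints flight
```

The point of the sketch is only the shape of the loop: goals are stated briefly, but satisfying them forces a walk through a space of cases.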

The foregoing is an introductory preamble to some remarks made by Denis Alexander in his book "Creation or evolution? Do we have to choose?", remarks I would like to showcase. As I have worked through this book what piqued my interest most recently is Alexander's section on "natural selection" in chapter 4. Now, as I have said before, unless the informational front-loading of the spongeam exists (which I actually doubt) conventional evolution is not going to be the underlying mechanism of change: The evolutionary search space is simply too large to be explored by the parallel resources of non-quantum trial and error processes such as are envisaged in ordinary evolution. But having said that, Alexander describes some other biological processes in his book which are known to exist and which work using the universal search, find, reject and select structure, and these mechanisms are not controversial (I think). Alexander refers to these biological versions of "natural selection" in the following general terms:

....we see the same principle of abundance and selection operating time and again.....Jesus himself used the same idea in his famous parable of the sower who needs to scatter far more seed than ever will germinate and lead to a good crop (Matthew 13:13ff).

He then goes on to give some real biological examples (see pages 103 to 105):

1. In the development of the brain neurons send out many "exploratory feelers" to other neurons and only the fruitful connections are kept; the others die.

2. Particularly fascinating was Alexander's description that "During the development of B cells a specialised region of our genome undergoes intensive random cutting and rejoining of the pieces of DNA that encode different parts of the antibody protein. This results in the production of millions of different antibodies, each one specific for a particular type of invader...."

3. Further, Alexander describes how when B cells replicate in response to an invader the antibody on the surface of the B cell changes during replication via a mutation mechanism. The B cells with the antibodies that best bind to the invader are kept and the rest are eliminated.
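The third of these - replication with mutation plus retention of the best binders - has the structure of a stochastic hill climb, which can be sketched in a toy program (hedged: "binding" here is just letter agreement with a fixed target string, my stand-in for affinity, with no pretence of real immunology):

```python
import random

random.seed(1)
TARGET = "ANTIGEN"                  # stands in for the invader's shape
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def binding(antibody):
    # crude affinity score: number of positions agreeing with the target
    return sum(a == t for a, t in zip(antibody, TARGET))

def mutate(antibody):
    # point mutation at a random position
    i = random.randrange(len(antibody))
    return antibody[:i] + random.choice(ALPHABET) + antibody[i + 1:]

antibody = "AAAAAAA"
for generation in range(2000):
    pool = [mutate(antibody) for _ in range(20)]   # replicate with mutation
    best = max(pool, key=binding)
    if binding(best) >= binding(antibody):         # keep the best binder...
        antibody = best                            # ...eliminate the rest
print(antibody)
```

Because only improvements (or neutral changes) are retained, the population homes in on the target without any search agent "knowing" the answer in advance.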

All very interesting, very interesting indeed. I might go as far as to call these quasi-intelligent processes; that is, they are goal controlled searches - or what I refer to as "back-loading". However, these processes use "real" materials as the search feelers rather than the tentative and readily expendable feelers of quantum mechanics. For the latter kind of search feeler we have to turn to the exciton which transfers energy in photosynthesis. See here for the following Wiki entry (my emphases):

In 2007 a quantum model was proposed by Graham Fleming and his co-workers which includes the possibility that photosynthetic energy transfer might involve quantum oscillations, explaining its unusually high efficiency.[3]

According to Fleming[4] there is direct evidence that remarkably long-lived wavelike electronic quantum coherence plays an important part in energy transfer processes during photosynthesis, which can explain the extreme efficiency of the energy transfer because it enables the system to sample all the potential energy pathways, with low loss, and choose the most efficient one.

This approach has been further investigated by Gregory Scholes and his team at the University of Toronto, which in early 2010 published research results that indicate that some marine algae make use of quantum-coherent electronic energy transfer (EET) to enhance the efficiency of their energy harnessing.[5][6][7]

There we have it: Quantum searching. 

Wednesday, May 11, 2016

The Thinknet Project. Footnote: On Self Description

The following is an introductory chapter to a mathematical manuscript I am compiling from old notes written in the 1980s on the subject of disorder. This chapter deals with the set theoretic categories I use to develop the idea of disorder. However, I'm also including it as a footnote to my Thinknet series as it touches on the subject of "self-awareness" and incomputability. 

Foundational Issues

1        Introduction
For me the subject of disorder starts with a simple model: I imagine a set of items where these items are not envisaged to have any particular orientational or sequential relation to one another; as in a set of database records the order in which the items occur, if any, is unimportant, irrelevant in fact. I then imagine a variety of properties being assigned or distributed over these items, thereby leading to classes of items. This item-property model is essentially a literal interpretation of the Venn diagram. The points in a Venn diagram are thought of as concrete items which can be collected together in various classes according to the properties those items have: In Venn notation we draw a circle round a set of points to represent a class, where each item in the class has some defining property. If we draw two circles based on two classes, each defined by their particular property, then we can represent the relationship between these two classes by the extent of the overlap, if any, between them; an idea familiar to anyone who has seen a Venn diagram. If we add more property-defined classes we may end up with a complex picture looking something like:
Figure 1: Many overlapping classes

This picture of the Venn diagram gives us a simple concrete model of set theory which immediately circumvents Russell’s paradox. It does this by distinguishing sharply between classes and the items of which the classes are composed; that is, classes are collections of items but the items are not themselves classes. This means that it is not possible to create a class of classes, a construction from which the paradox arises; in the Venn picture one can only form classes from items. This is a very easy and elementary way of preventing Russell’s paradox, although there are of course big mathematical disadvantages in disallowing classes of classes. Von Neumann’s version of axiomatic set theory for instance, which is a more sophisticated solution to Russell’s paradox, makes a distinction between classes which are not elements and classes which are elements. This prevents Russell’s paradox without sacrificing the possibility of allowing classes of classes.

However, there is a way of extending the Venn model in order to allow classes of classes without incurring the penalty of Russell’s paradox. We take a second Venn diagram whose points (= items) are mapped from the classes in the first diagram, and then use the properties of these second order items (which in fact include the classes mapped from the first diagram) to group them into classes. Provided we only allow mappings from the first Venn diagram to the second Venn diagram, and not the other way round, this hierarchical strategy for eliminating Russell’s paradox works because in the hierarchical model a class can never be a member of itself; a class can only be a member of a higher order class in another Venn diagram.
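The one-way, two-level scheme is easy to make concrete in code (a sketch of the idea only; here level-1 classes are sets of plain items and level-2 classes are built only from level-1 classes, so the question of a class containing itself never even arises):

```python
# Level 1: classes are just frozensets of plain items.
items = {"a", "b", "c", "d"}
vowels = frozenset(i for i in items if i in "aeiou")   # property: "is a vowel"
early  = frozenset(i for i in items if i < "c")        # property: "early letter"

# Level 2: a second Venn diagram whose points are the level-1 classes.
level1_classes = {vowels, early}
small_classes = frozenset(c for c in level1_classes if len(c) <= 2)

# Membership questions only flow one way: level 2 asks about level 1.
print(early in small_classes)   # a legitimate class-of-classes question
# "small_classes in small_classes" is simply not constructible within the
# scheme, since level-2 classes are formed only from level-1 points.
```

The hierarchy, not any clever axiom, is what blocks the paradox: information about classes only ever travels upward.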

The foregoing solution to Russell’s paradox reminds me of a picture used by Karl Popper at a lecture of his which I attended at Kent University in 1971. He imagined a man drawing a map of the room in which he was sitting. To be thorough the man would eventually have to draw his own map, as it is part of the room, and this would include the very marks on the map! If you think about it this leads to an infinite regress: The map would include a picture of the map which would include a picture of the map etc. etc. This is not necessarily a very tractable solution, but it’s a possibility that can be conceived. Thus the system which can boast full self-awareness must be infinite!

2       Self-description, conceptual feedback and contradiction
If a class is permitted to be a member of itself then it opens the possibility of contradictory self-description (a form of self-reference). In contradictory self-description it is possible for the act of self-description to render the description false. For example consider the following sentence: “This sentence contains a reference to itself”. The latter sentence is self-referencing in that it talks about itself; it is telling us that it contains a reference to itself, which of course in this case it does. This is in fact a case of consistent self-description; that is, this sentence contains non-contradictory self-description and the sentence states a truth about itself. On the other hand compare the following phrase: “Hello world!”. Nothing wrong with that of course, but it is clearly a phrase which contains no reference to itself. OK, so if it contains no reference to itself let’s affirm this and rewrite it as: “Hello World, this sentence contains no reference to itself”. This latter sentence now contains a reference to itself, but as this reference is denying that it contains a reference to itself it thereby contradicts itself…. It is not possible, without contradiction, for a sentence to contain no reference to itself and at the same time for that sentence to describe itself as lacking in self-reference.

As in the case of self-descriptive sentences, when we open up the possibility of a class being a member of itself, self-description and therefore self-contradiction become a liability. We can see this from the following analysis: A class is defined by a property; thus a class is a way of describing the items in that class because the class is telling us that each item has some property, let us call that property “X”. Therefore if a class, let’s call it C, is a member of itself then C is effectively describing itself because it is telling us that C must also have property X. But whether or not the class C as defined by X can be formed without contradiction now depends on the nature of X. For if X is something like “The class of classes that don’t contain themselves” then we are quickly faced with Russell’s oscillating contradiction: Viz: If C isn’t a member of itself then X requires that we must put C inside C, thus making it a member of itself. But if C is now a member of itself it then violates the stipulation that X = all classes that aren’t members of themselves. And so we oscillate between putting C inside itself and then taking it out. This kind of conceptual behavior can be likened to a form of unstable feedback. This is something I developed in this blog post.
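The oscillation can be mimicked as an unstable feedback loop in a couple of lines (a toy sketch: the membership verdict is forced to negate itself on every pass, so it never settles):

```python
# Property X says: C is a member of itself only if it is NOT a member of itself.
contains_itself = False
history = []
for step in range(6):
    contains_itself = not contains_itself   # each application of X flips the verdict
    history.append(contains_itself)
print(history)   # [True, False, True, False, True, False] - no stable answer
```

There is no fixed point for the loop to converge on, which is the computational face of the contradiction.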

As I have already said, employing a second level Venn diagram is one way, if an unconventional way, of solving Russell’s problem; here one can only form classes of classes using a higher or meta-level Venn diagram whose points (or items) represent classes in the lower level diagram. Thus it becomes impossible for a class to be self-descriptive because classes are never members of themselves, thereby preventing the possibility of an unstable oscillatory conceptual feedback loop. In the hierarchical system where one Venn diagram looks down on and describes another Venn diagram the information only moves one way, namely bottom up, thus preventing contradictory “feedback loops” forming.

This method of using a hierarchy of Venn diagrams is probably not a convenient solution for conventional mathematics, but it works for open ended real world ontologies where systems are not necessarily in isolated self-containment as required by axiomatic mathematical systems. In the real world ontologically separate systems can take up the job of describing other systems without getting into unstable/contradictory “feedback loops”. As we shall see in the next section this realization helps solve the conundrum raised by Penrose in his two books “The Emperor’s New Mind” and “Shadows of the Mind”.

3       Penrose: Is the human mind an incomputable process?
In his books “The Emperor’s New Mind” and “Shadows of the Mind” Roger Penrose concludes that the human mind is an incomputable process. I dealt with this matter more thoroughly in the blog posts here and here, but in the following I rehash some of my reasoning against this conclusion.

Penrose defines a class of algorithms using the symbolism Cq(n) – this notation represents the qth computation acting on a single number n. In fact it must be stressed that the set Cq(n) is taken to enumerate all possible algorithms that take the number n as a parameter. Penrose then goes on to demonstrate a version of the halting theorem. This theorem tells us that there is no general algorithm which can in all cases be used to correctly test the halting status of any other algorithm and flag this status by itself halting if the tested algorithm doesn’t halt. As Penrose then shows, this is because the algorithm which tests for halting status must itself be one of the Cq(n), and therefore when the halting-test algorithm is submitted to itself typical contradictions of self-description arise. Viz: If the halting-test algorithm halts when testing itself it would be trying to tell us that it doesn’t halt when testing itself! Another way of saying that is: When submitted to itself the halting-test algorithm is supposed to halt if it doesn’t halt! So, in the very act of trying to tell the truth about itself the halting-test algorithm invalidates that “truth”; this is a typical self-descriptive conundrum.

Penrose submits that the way to remove this contradiction is to propose that the halting-test algorithm fails to stop when it is submitted to itself. But this means that it is thereby unable to determine the truth about itself. According to Penrose the set defined by Cq(n) is universal enough to include any algorithms that are running in the human mind, so here’s the twist: If we as humans can see this algorithm doesn’t halt, but the universal halting-test algorithm fails to reveal it, this must mean that the human mind is doing something an algorithm can’t. Ergo (according to Penrose) the human mind must be using an incomputable process.  
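The nub of the self-descriptive trap can be displayed in a standard diagonal sketch (hedged: the code below does not implement a halting tester - the theorem says none can exist - it merely tabulates how a "diagonal" program, built to do the opposite of whatever is predicted about it, defeats every possible prediction):

```python
# Suppose a universal halting tester existed. The diagonal move is to build a
# program which consults the tester about itself and then does the opposite.
# Whatever the tester predicts, the prediction comes out false:
def diagonal_behaviour(prediction_halts: bool) -> bool:
    # the diagonal program halts exactly when the tester predicts it loops
    return not prediction_halts

for prediction in (True, False):
    actual = diagonal_behaviour(prediction)
    print(f"tester predicts halts={prediction}, program actually halts={actual}")
```

Both branches of the table come out wrong for the tester, which is the whole content of the contradiction Penrose trades on.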

The real world is open ended and its ontologies are not self-contained: One ontology can always be described by another ontology, thus removing the potential for contradiction when a single ontology attempts to describe itself. My proposal is that the contradictions in Penrose’s example arise through self-description. So, even though the human mind may be running one of the Cq(n) in form, the ontology of the mind is a meta-ontology separate from the symbolic manipulations of Penrose’s self-contained algorithmic enumeration. Because the human mind is effectively one up the descriptive hierarchy it does not become entangled with the self-descriptive contradiction that Penrose finds inside the set of Cq(n): In this instance the human mind is not attempting to describe itself; rather it is describing something other than itself. Hence, being ontologically at a higher level, the human mind isn’t hamstrung by the contradictory self-referencing loop which occurs inside Penrose’s algorithmic set. But... and this is the big ‘but’... if the human mind turns in on its own algorithms and attempts to make descriptions about itself, we find that the conceptual feedback loops which are liable to create self-descriptive contradictions, and which result in the halting theorem, reassert themselves. In short the human mind isn’t above algorithmic self-referencing limitations and I am therefore inclined to reject Penrose’s conclusion that human thinking is an incomputable process.

Tuesday, May 03, 2016

The Sun Isn't a Star!!?? Light Relief Time

No surprise old man Sun has got a smirk on his face!
That's right, according to some Christian fundamentalists the Bible tells them that the Sun can't be a star. Let's start at the fairly moderate (?) end of the spectrum with Answers in Genesis' Danny Faulkner. At the end of this AiG link:

....we find an article by Faulkner entitled "Not Just Another Star". Faulkner tells us that (My emphases): 

Given everything we now know about the brightness of other stars, it’s fashionable today to call the sun a star, even an average star. But is that really the case? While the sun has many characteristics similar to stars, the Bible never refers to it as a star.

My first reaction is "Of course the Bible isn't going to refer to the Sun as a star, that requires modern science!" .....but more about that later. Faulkner then goes on to say that the Sun is rather unique compared to the stars because:

a) The Sun is relatively lithium sparse, although Faulkner doesn't know if this is of (anthropic) significance.
b) The Sun is relatively stable in its energy/particle output compared to similar stars - a fact necessary for life on Earth.

Starting to crack up...
Even assuming Faulkner's facts are right it is difficult to register them as startling. The solar system as a whole, with its many adjustable variables such as planet spacing, size and composition etc., is not going to be duplicated very often elsewhere in the Galaxy, least of all the peculiar conditions of the Earthly environment. Remember also how different and unique many of the planets themselves are - each seems to have its own story. In short, uniqueness isn't very unique! Of course Christians can see providence at work here, but given our modern scientific perspective there is not enough reason in my view to be coy about classing the Sun as a star and therefore calling it a star. It appears that on the basis of his slavish Biblical literalism Faulkner is reluctant to even call the Sun a star!

But fundamentalist David Lowe goes much further than Faulkner. Taking the pre-scientific Biblical perspective on the Sun as sacred, Lowe straightforwardly denies that the Sun has anything in common with stars and even suggests that the Sun doesn't use nuclear power; instead he supports the "electric Sun" theory. His ideas about the Sun can be read in this document. His web site is: fits of laughter....

I have made mention of this guy before: See here.

But really we can very easily see why such people think the Bible teaches that the Sun isn't a star, just as some fundamentalists think the Bible teaches geocentricity or a flat Earth. In all cases we recall that the Bible is written from the perspective of the people of the day; not having access to sophisticated science meant that they described and categorised the cosmos from the point of view of folk-level observation, unaided by instruments and modern theoretical narratives. Given this fact it is no surprise that the all too obvious Sun is put into a category of its own - from the perspective of plain observation, not to mention life on Earth's Solar dependence, such categorization is perfectly understandable, entirely legitimate and Biblical! For most of the history of the human race the Sun has, from the point of view of appearance, dominated life on Earth and therefore looked to be of an entirely different genus to the stars. This human perspective effect is enough to justify and explain the special place that the Sun has in the Bible. But that special place is down to a pre-scientific culture. Great differences in appearance etc. relative to location do not in themselves necessarily imply intrinsic category differences. The same applies to geocentricity and flat Earth. The perspective driven Bible majors on neither the shape of the Earth nor its relative position in the Cosmos. (But see here). Faulkner and Lowe, like other fundies, are treating the Bible as if it were a formal science textbook. Add that to fundamentalists' religious-observance-based obsession with "Obey! obey! obey! submit!" and we can see why categorizing the Sun as a star is taboo for Faulkner and Lowe.

Finally the joker in the pack. This article entitled "The Sun is not a star" is clearly on a spoof website. But having said that I find myself peering very carefully even at spoofs like this; I don't want to become a fool to Poe's law! But this site is worth a visit: it goes to show how fundamentalists are making Christianity a laughing stock - in fact I find myself laughing too!

Tuesday, April 26, 2016

Evolutionary Theory vs. The Theory of Evolution

Who or what is driving evolution?
Evangelical atheist Professor Larry Moran writes a very useful blog post here. The title of the post is "Don't call it the theory of evolution". His justification for this title, and I probably agree with him, is that evolution is, in fact, covered by a cluster of ideas and theories about the mechanisms of evolution which are better termed "Evolutionary Theory", since there really isn't such a thing as The Theory of Evolution. This is what he says (my emphases):

What do scientists really mean when they refer to "The Theory of Evolution"? There is no single theory of evolution that covers all the mechanisms of evolution. There's the Theory of Natural Selection, and Neutral Theory, and the Theory of Random Genetic Drift, and a lot of theoretical population genetics. Sometimes you can lump them all together by referring to the Modern Synthesis or Neo-Darwinism

Instead of using the phrase "The Theory of Evolution," I think we should be referring to "evolutionary theory," which may come in different flavors. The term "evolutionary theory" encompasses a bunch of different ideas about the mechanisms of evolution and conveys a much more accurate description of the theoretical basis behind evolution. Douglas Futuyma prefers "evolutionary theory" in his textbook 'Evolution' and I think he's right. It allows him to devote individual chapters to "The Theory of Random Genetic Drift" and "The Theory Natural Selection."

Larry goes on to quote Douglas Futuyma (I have the same book, Evolution) who actually gives a very general definition of evolution that Larry himself has touted. I emphasize the aspects of this general definition in the bold emphases below:

So is evolution a fact or a theory? In light of these definitions, evolution is a scientific fact. That is, descent of all species, with modification, from common ancestors is a hypothesis that in the past 150 years or so has been supported by so much evidence, and so successfully resisted all challenges, that it has become a fact. But this history of evolutionary change is explained by evolutionary theory, the body of statements (about mutation, selection, genetic drift, developmental constraints, and so forth) that together account for the various changes that organisms have undergone.

We now know that Darwin's hypothesis of natural selection on hereditary variation was correct, but we also know that there are more causes of evolution than Darwin realized, and that natural selection and hereditary variation themselves are more complex than he imagined. A body of ideas about the causes of evolution, including mutation, recombination, gene flow, isolation, random genetic drift, the many forms of natural selection, and other factors, constitute our current theory of evolution, or "evolutionary theory." Like all theories in science, it is a work in progress, for we do not yet know the causes of all of evolution, or all the biological phenomena that evolutionary biology will have to explain. Indeed, some details may turn out to be wrong. But the main tenets of the theory, as far as it goes, are so well supported that most biologists confidently accept evolutionary theory as the foundation of the science of life.

What we have here is a concept of evolution that can be roughly characterized as mere continuity of change, with the full range of proposed mechanisms of change up for grabs. This form of evolution, which is accepted as "fact", is so general that it could include all sorts of hidden mechanisms entailing "modification", perhaps even the tinkerings of an intelligent designer (not that I'm necessarily suggesting that; I mention it only to illustrate the generality of the fact of evolution). This kind of evolution-as-"fact" is a very broad tent indeed! Larry has posted on this very general definition of evolution before and I did a post on his post here.

However, it is quite obvious that Larry and Futuyma wouldn't actually have the mechanism of intelligent modifications in mind; rather, they are likely to opt for mechanisms implicit in the mathematical "law and disorder"* objects of the physical canon. In fact, as we read above, both Futuyma and Larry seem confident that the main causes of evolution have been nailed. Nevertheless, there is just enough room for maneuver here for the homunculus Intelligent Designers!

Here is one of the statements by Larry which I quoted in my original post on this general definition of evolution:

Neil deGrasse Tyson said that the theory of evolution is a fact. This is not correct. Evolution is a fact. Evolutionary theory attempts to explain how evolution occurs. Some of the explanations, like natural selection, are facts but many aspects of modern evolutionary theory are still hotly debated in the scientific community.

And by "evolution" Larry doesn't mean a great deal more than continuity of change. However, I doubt they will be debating whether a homunculus is involved!

Finally in his post Larry concludes with:

When you're talking about the mechanisms of evolution, please use "evolutionary theory" instead of "the theory of evolution."

Will do!

* Or "law and randomness". By this I mean that the calculations of the physical canon use both short-time, small-space algorithms and the statistics of disorder. See here for more technical details on the meaning of short-time, small-space algorithms and randomness. The deep question is: why should these relatively simple mathematical objects predominate when other possibilities exist?
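As a toy illustration of this footnote's distinction (entirely my own construction, not taken from the linked post): a "lawful" sequence is one a short algorithm can generate, whereas a "disordered" one has no such generating shortcut. Standard `zlib` compression is used here as a crude, hypothetical proxy for algorithmic (Kolmogorov) compressibility:

```python
# A sketch, not a rigorous measure: zlib compression stands in for
# algorithmic compressibility. "Law" = output of a short, simple rule;
# "disorder" = random bits with no generating shortcut.
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Size of the compressed data relative to the original."""
    return len(zlib.compress(data, 9)) / len(data)

# "Law": a long string produced by a tiny rule (a short-time, small-space algorithm).
lawful = b"01" * 5000

# "Disorder": random bytes, fixed seed for repeatability.
random.seed(42)
disordered = bytes(random.getrandbits(8) for _ in range(10000))

# The lawful string compresses dramatically; the random one barely at all.
print(compressed_ratio(lawful))      # a tiny fraction of the original size
print(compressed_ratio(disordered))  # close to 1.0
```

The point of the contrast is only that the physical canon's outputs seem confined to these two relatively simple mathematical categories.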

Sunday, April 17, 2016

Mind the Gaps

In a post on the ID website Uncommon Descent entitled Casey Luskin on Theistic Evolutionist's evidence-phobia, contributor Denyse O'Leary quotes de facto ID guru Casey Luskin as follows:

Picture originally found on "Sandwalk". The speech bubble is mine.

Of course, when BioLogos claims “it is all intelligently designed,” they mean that strictly as a faith-based theological doctrine for which they can provide no supporting scientific evidence. Indeed, it’s ironic that BioLogos accuses ID of “removing God from the process of creation” when Collins writes that “science’s domain is to explore nature. God’s domain is in the spiritual world, a realm not possible to explore with the tools and language of science.” Under Collins’s view, God’s “domain” is seemingly fenced off from “nature,” which belongs to “science.”

My Comment: Here we go once more unto the breach, dear friends: Western dualism's nature vs. theology dichotomy! What's the point of theology if it isn't responding to the empirical conditions of the human predicament by attempting to provide, however inadequately, a world-view level account of it? Under any circumstances theology is not fenced off from "nature". If nature = creation and humanity is part of creation, then any experience, observation or thought we have, based as it is in the created psyche of our humanity, will by definition also be part of creation and therefore classify as "nature". Ergo, theology, which presumably attempts to make sense of the broad sweep of human experience, is inextricably bound up with so-called "nature". But admittedly theology, like string theory, has more the role of providing postdictive sense-making narratives than that of predictive testability.

Since CIDs [Christian intelligent design supporters] treat design as a scientific hypothesis, not a theological doctrine, they would reply that a failure scientifically to detect design doesn't mean God was somehow theologically absent, and would say that natural explanations don't "remov[e] God." BTEs [BioLogos theistic evolutionists] thus fail to recognize that CIDs have no objection to God using natural, secondary causes. They also fail to appreciate that in some cases, CIDs argue that natural explanations can even provide evidence for design (e.g., cosmic fine-tuning). But CIDs disagree with BTEs that God must always use natural causes, and argue we should allow the possibility that God might act in a scientifically detectable manner. Thus, one important dividing line is:

• BTEs accept materialistic evolutionary explanations (such as neo-Darwinism) where the history of life appears unguided, and deny we scientifically detect design.

• CIDs hold we may scientifically detect design as the best scientific explanation for many aspects of biology.

My Comment: I think you will find that in principle de facto IDists like Luskin understand "natural causes" to be those explanations which fall within the present canon of physics or, presumably, any future development of that canon (although, as we will see below, in practice the IDists' so-called "natural causes" actually refer to the much dreaded evolution). The IDists' explanatory filter defaults to intelligent agency when the physical canon fails as an explanation. But the explanations of physics inevitably face an ultimate logical hiatus or explanatory gap; this is because physics is in effect descriptive and therefore its final and complete word can only be a kernel of logically compressed brute fact; physical explanations can do no more and no less. Hence, the explanatory filter will eventually default to intelligent agency when the ultimate logical hiatus is arrived at. The pertinent question is: at what point is the gap going to be found? Is that gap going to be found at the level of biological configurations; that is, are biological structures fundamental givens? Or is the gap going to be found at the fine-tuning level, where once the physical canon has been set up and correctly tuned the cosmos will then generate life? If, repeat if, Luskin is just talking about this general logical hiatus then I would question his claim that his kind of ID has a formal scientific status. After all, a grand logical gap is mathematically destined to be part of the physical canon under any circumstances and will exist wherever it is found. And if humans have anything to do with it, the information inherent in this logical gap will inevitably prompt debate about its origin (this is why my version of the "explanatory filter" is recursive). The ensuing debate is likely to have a strong philosophical and theological slant. Thus arguing for God on the basis of an inevitable logical hiatus will probably veer towards theology and/or philosophy rather than formal science.
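The filter logic just described, together with the recursive step, might be sketched very schematically as follows. This is my own illustrative construction, not Dembski's or Luskin's actual formalism; the predicates `law_explains` and `chance_explains` are hypothetical stand-ins for the genuinely contentious scientific tests:

```python
# An illustrative sketch of "explanatory filter" logic with a recursive step:
# when the filter outputs "design", the information implicit in that verdict
# becomes a fresh explanandum and the filter is applied again, until the
# grand logical hiatus is reached.

def explanatory_filter(phenomenon, law_explains, chance_explains,
                       depth=0, max_depth=3):
    """Return the chain of verdicts the filter produces for a phenomenon."""
    if law_explains(phenomenon):
        return ["law"]
    if chance_explains(phenomenon):
        return ["chance"]
    if depth >= max_depth:
        # The ultimate logical hiatus: no further regress is attempted and
        # the debate passes to philosophy and/or theology.
        return ["design (logical hiatus reached)"]
    # Default verdict is design; recurse on the origin of its information.
    return ["design"] + explanatory_filter(
        "origin of the information behind: " + phenomenon,
        law_explains, chance_explains, depth + 1, max_depth)

# With predicates that never fire, the filter regresses until the hiatus.
chain = explanatory_filter("a biological configuration",
                           lambda p: False, lambda p: False, max_depth=2)
print(chain)  # ['design', 'design', 'design (logical hiatus reached)']
```

The sketch makes the point in the paragraph above concrete: whatever the predicates are, the regress must terminate in a brute-fact gap somewhere.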

But we know that as a rule de facto IDists actually have a deep raison d'être for insisting that a logical hiatus exists inside biology itself, not just generally in the canon of physics. For rather than trace the gap all the way back to the physics of fine-tuning and the abstruse and contentious philosophico-theological posturing about the origins of physics, they much prefer to bring the gap closer to home; namely, to the level of biological configurations. And we know what that means: de facto IDists like Luskin hate evolution and will claim evolution didn't happen (because intelligence did it!). Whether conventional evolutionary theory works or not is something that is subject to testing. So, in this sense, biologically based intelligence-of-the-gaps sidesteps the highfalutin philosophical questions about ultimate origins and actually becomes scientific, although it is a very negative science of (evolution) denial.

As I have remarked before on this blog, this commitment to anti-evolutionism is potentially toxic to theology if some version of evolution is ultimately found to work. Luskin's ID, although he may not bring himself to be very explicit about it, is very dependent on the failure of evolutionary theory. Luskin's so-called "scientific hypothesis" is not about the philosophico-theological issues which surround the question of the grand logical hiatus but rather the North American Christian right's emphatic "No! No! No!" to evolution. When Luskin accuses BioLogos of requiring God always to use "natural causes" he can't be accusing them of trying to do away with the grand logical hiatus, because that is logically impossible. What he really means is that BioLogos' loathed publicly funded establishment academics are evolutionists!

Notice that Luskin wrongly refers to evolution as unguided. As I have repeatedly attempted to make clear on this blog, even standard evolution is far from unguided: it very much depends on the up-front information needed to "guide" it in the form of the channels of the spongeam, which, if they exist (although at my guess they probably don't), would have to be implicit in the canon of physics and/or future developments of that canon.

[A]ccording to textbooks and leading evolutionary biologists, neo-Darwinian evolution is defined as an unguided or undirected process of natural selection acting upon random mutation. Thus, when theistic evolutionists say that “God guided evolution,” what they mean is that somehow God guided an evolutionary process which for all scientific intents and purposes appears unguided. As Francis Collins put it in The Language of God, God created life such that “from our perspective, limited as it is by the tyranny of linear time, this would appear a random and undirected process.” Whether it is theologically or philosophically coherent to claim that “God guided an apparently unguided process” I will leave to the theologians and the philosophers. ID avoids these problems by maintaining that life’s history doesn’t appear unguided, and that we can scientifically detect that intelligent action was involved.

My Comment: The premise that pervades this paragraph falls over because, as I've already said, conventional evolution, on its own logic, is guided: it effectively posits the implicit information of the spongeam, a requirement that is related to Dembski's conservation of information. Because testing evolution amounts to testing for the existence of the spongeam, the question of its existence is subject to formal scientific investigation. On the other hand, the question of the origin of the information in the spongeam, which would have to be implicit in the physical canon, concerns that final logical hiatus I've already referred to and is therefore potentially philosophical and/or theological.

Theistic evolutionists sometimes try to obscure these differences, such as when BioLogos says “it is all intelligently designed.” But when pressed, they’ll admit this is a strictly theological view, since they believe none of that design is scientifically detectable. CIDs wonder how one can speak of “intelligent design” if it’s always hidden and undetectable. “We’re promoting a scientific theory, not a theological doctrine,” replies ID, “and our theory detects design in nature through scientific observations and evidence.”

Some theistic evolutionists will then further reply by saying, “Since we both believe in some form of ‘intelligent design,’ the differences between our views are small.” ID proponents retort: “Whether small or not, these differences make all the difference in the world.”

And there’s the rub. By denying that we scientifically detect design in nature, BTEs cede to materialists some of the most important territory in the debate over atheism and religion. Biologically speaking, theistic evolution gives no reasons to believe in God.

My Comment: Since the logical hiatus in physics is mathematically inevitable and must ultimately be acknowledged by both BioLogos and Luskin, at first sight it might seem that if they both use the explanatory filter, both are justified in claiming to be IDists. Therefore Luskin's claim that theistic evolution gives no reasons to believe in God is false. So what's the real basis of Luskin's beef? Well, Luskin can't bring himself to admit it, but what he really means by his claim to having a scientific theory is that he is anti-evolution and BioLogos isn't. But bland anti-evolutionism is not a great way to lay claim to a "scientific theory". Hence de facto IDists will attempt to lay claim to a positive science of "intelligence did it!". But this doesn't hold much water because some de facto IDists will actually tell us that explicating the nature of the intelligence that "did it" is not part of ID! This makes it very difficult to use this "science" in a positive way to make predictions. For example, de facto ID's belief that there is no junk DNA is problematical given the inscrutability that some IDists build into their intelligent agent. That inscrutability makes it all but impossible to anticipate the methods, motives and personality of the intelligence; maybe that intelligence has some obscure reason for storing redundant and repetitive DNA in the genome. (See here for a blog post of mine that tries to take a sympathetic view of ID "predictions".)

To be clear, I’m not saying that if one accepts Darwinian evolution then one cannot be a Christian. Accepting or rejecting the grand Darwinian story is a “disputable” or “secondary” matter, and Christians have freedom to hold different views on this issue. But while it may be possible to claim God used apparently unguided evolutionary processes to create life, that doesn’t mean Darwinian evolution is theologically neutral.

According to orthodox Darwinian thinking, undirected processes created not just our bodies, but also our brains, our behaviors, our deepest desires, and even our religious impulses. Under theistic Darwinism, God guided all these processes such that the whole show appears unguided. Thus, theistic evolution stands in direct contrast to Romans 1:20 where Paul taught that God is "clearly seen" in nature. In contrast, theistic evolution implies God's involvement in creating humans is completely unseeable.

Theistic evolution may not be absolutely incompatible with believing in God, but it offers no scientific reasons to do so. Perhaps this is why William Provine writes: “One can have a religious view that is compatible with evolution only if the religious view is indistinguishable from atheism.”

My Comment: The first two paragraphs of this passage are incoherent given that conventional evolution is far from unguided; presumably this fact is not "clearly seen" by the likes of Luskin, who can only see biological gaps. But on account of de facto ID's explanatory filter, conventional evolution, with its ultimate inevitable logical hiatus, does offer at least a prima facie case for believing in God, contrary to what Luskin says, as I have already stated. Thus, from the point of view of the explanatory filter, conventional evolution is theologically neutral. However, that's not to say it is theologically neutral on the deeper question of whether a Christian God would actually reify such a process.


Finally, the post on Uncommon Descent had some snarky concluding comments from Denyse O'Leary:

So many people marketing theistic evolution these days dislike evidence... If the evolution scene were what they claim it is, you'd think we'd be the ones not to want evidence. But we totally rely on it and are comfortable with it.

As a science, de facto ID is primarily negative. If de facto IDists are loath to comment on the nature of the intelligence at work, the power of ID to provide predictive evidence is compromised. O'Leary's boast about ID being very evidence-based rings hollow; ditto Luskin's claim that de facto ID is strongly scientific. The fact is, de facto ID is not a hard science.

My own attempts at explaining evolution in terms of an immanent intelligence at work require the nature of intelligence to be at least partly unpacked – see here and here. However, let me make it clear that this work is highly exploratory, speculative, tentative and very unfinished. So, I am in no position to bully either atheists or de facto IDists round to my point of view.