Friday, January 22, 2016

Necessary Conditions for Evolution: The Spongeam

Conventional evolution is conceived as an incremental process of change. That has implications for the layout of self-perpetuating, self-replicating structures in configuration space.

The following are some notes on the object I refer to as the spongeam – this is a structure in configuration space that is a necessary (but not sufficient) condition for conventional evolution. The spongeam is a logical implication of evolution as it is currently understood.

Evolution is sometimes wrongly caricatured as an unguided chance process. See for example the following statement by IDist “niwrad” on the website Uncommon Descent. In his UD post niwrad expresses his dismay at theologian Alister McGrath’s acceptance of “theistic evolution”. In this quote I have added emphases where I believe niwrad errs:

Unguided evolution (whatever meaning we give it, and all the more so if we give it a meaning based on Darwinian evolution, which is what theistic evolutionists and McGrath mean) is a theory based on chance, on randomness, i.e. accidents. If “the universe is not an accident” — as he [McGrath] rightly believes — how can evolution, an engine of accidents, be an explanation of “how the world started”, with the same plausibility of creationism and ID?

“Unguided evolution…a theory based on chance… an engine of accidents” are all phrases that thoroughly misrepresent the true situation with evolutionary theory as it stands today. The answer to niwrad’s last question has been provided many times on this blog: In short, conventional evolution is not unguided but is in fact a random walk process which takes place within a tight envelope of constraint. Therefore McGrath’s apparent belief in theistic evolution is consistent with a belief that the universe is not an accident. Let me expand….

Even though I myself have reservations about evolution (as currently understood) it is certainly not a purely chance process. It is ironic that the very reason why I can’t easily dismiss evolution on the grounds of entropy is precisely because it is not an unguided process; it is, in fact, a highly channelled form of entropy: for just as life annexes and organises increasing amounts of matter as it populates the world under the constraint of the information implicit in its machinery, and yet still keeps within the second law of thermodynamics, so too would evolution. Conventional evolution can only work if “randomness” plays out within a very small probabilistic envelope. (It is arguable – albeit only arguable – that the high information implicit in this envelope is itself implicit in physics.) It is also ironic that not many in the de facto ID community understand the conclusion of one of their very own gurus, William Dembski, whose “Conservation of Information”, if applied to the evolution of life, can be written as:

Probability of life given our physical regime: P(Life | Physical Regime) = p/q

Where:

p is the a priori probability of living configurations; that is, the probability of selecting a member from the class of living configurations assuming the principle of indifference over the space of all possible configurations. The organised complexity of living things clearly implies that p is going to be extremely small.

q is the probability of selecting a physical regime which enhances the chances of selecting life, assuming the principle of indifference over the space of possible physical regimes.

See this paper for a “proof” of the above relation.
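To get a feel for how tight this envelope must be, here is a small back-of-the-envelope calculation in Python; the numbers are purely hypothetical and serve only to illustrate the relation above.

    # A purely illustrative calculation (hypothetical numbers, not estimates).
    # Work in log2 units to avoid floating-point underflow.
    import math

    log2_p = -1000.0     # suppose the a priori probability of life were p = 2**-1000
    target_P = 0.5       # demand a "realistic" chance of life given our physical regime

    # From P(Life | Physical Regime) = p/q we need q <= p / target_P
    log2_q_max = log2_p - math.log2(target_P)
    print("log2(q) must be <=", log2_q_max)   # -999.0: q must be almost as small as p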

To give life a realistic probability of coming about, the value of q, which is a measure of the tightness of the constricting envelope, must be very small. It is this constricting envelope which I identify as an object I call the spongeam. In the points below I develop the concept of the spongeam stage by stage on the assumption that standard ideas of evolution apply:

  1. We know that there is a subset of configurations which are stable enough to self-perpetuate and self-replicate (I refer to these as self-perpetuating self-replicating structures, or SPSRSs). One fairly compelling corollary of this definition of SPSRSs is that they must be highly organised to carry out their tasks and therefore constitute a very tiny set of possibilities when compared to the overwhelming size of the configuration space consistent with our physical regime.
  2. An important question is this: How does the set of SPSRSs populate configuration space? What does this pattern look like when viewed across configuration space?
  3. If conventional evolution has occurred then certain conditions must be true of this set. Viz: Because conventional evolution steps through configuration space in small steps, it follows that for an SPSRS set to be favourable to evolution, that set must be fully connected in as much as any one member must be no further than some typical maximum distance from at least one other member of the set; let’s call that distance s. (A toy sketch of this connectivity condition follows the list below.)
  4. If conventional evolution has been actualised in paleontological history it follows that H steps of typical step size s separate us from the first ancestors. That is, there must be at least one unbroken path between us and those antecedent structures.
  5. But even if this connected set exists there is another necessary condition of evolution: If we select any particular SPSRS and then amend it in m steps of typical size b, always ensuring that each step moves away from the selected SPSRS, then the high order of SPSRSs implies that most paths will lead to non-viable structures – that is, structures that are not SPSRSs. Thus, we have a second necessary condition for evolution: Viz: compared to the number of outgoing paths which lead to non-viability, there must be a sufficient number of paths connecting an SPSRS to the rest of the set for it to have a realistic chance of stepping to another slightly different SPSRS. Therefore, the set of SPSRSs must not only be connected but the connections must be sufficiently strong to give a realistic chance of random evolutionary stepping across the spongeam.
  6. There is one aspect of this matter which I consistently miss out for simplicity’s sake; this is the fact that populations of SPSRSs become their own environment and therefore there is a feedback relation between SPSRSs and their environment. This feedback is likely to be non-linear and therefore gives rise to chaos, thus compounding the difficulties besetting human efforts to get an analytical handle on the spongeam.
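To make the connectivity condition in point 3 concrete, here is a minimal toy sketch in Python. The bit strings and the Hamming-distance measure are my own illustrative stand-ins for configurations and step distance; they are not meant as a biological model.

    # Toy sketch (hypothetical data): treat each SPSRS as a short bit string and say two
    # members are neighbours if they differ in at most s positions (Hamming distance).
    # The connectivity condition asks whether the whole set forms one connected component.
    from itertools import combinations

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def is_connected(spsrs_set, s):
        """Flood-fill over the neighbourhood graph implied by step size s."""
        nodes = list(spsrs_set)
        adjacency = {n: [] for n in nodes}
        for a, b in combinations(nodes, 2):
            if hamming(a, b) <= s:
                adjacency[a].append(b)
                adjacency[b].append(a)
        seen, frontier = {nodes[0]}, [nodes[0]]
        while frontier:
            current = frontier.pop()
            for nxt in adjacency[current]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return len(seen) == len(nodes)

    # A miniature, made-up "spongeam": every member is within one step of another member.
    toy_spsrs = ["00000", "00001", "00011", "10011", "11011"]
    print(is_connected(toy_spsrs, s=1))   # True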
Ignoring point 6, the foregoing creates in my mind’s eye a kind of static, spongy-looking structure in configuration space; that is, a network linking the set of SPSRSs into a connected set. Viz:

If the Spongeam exists it might be the multidimensional equivalent of this 3D sponge.

If you imagine this structure to be far more attenuated than is shown here, then the result would be an approximation to what I envisage to be the kind of structure in configuration space which is a necessary condition of evolution. The next question is: how does movement take place from one point to another in the spongeam, thereby giving us an evolutionary dynamic? The answer to that question (and the answer I’ve assumed above) is that in standard evolution the motion is one of random walk – that is, it is diffusional. But random walk or not, it is clear that the tight constraint which the spongeam puts on the random motions gives the lie to niwrad’s claim that evolution is unguided. (See also atheist Chris Nedin who, ironically, is also loath to admit that evolution is an inevitably guided process. The ulterior motive here may be an unwillingness to accept that any workable model of evolution, as William Dembski has shown, must be tapping into some source of information.)

The spongeam is more fundamental than the fitness surface. The fitness value is a kind of “field gradient” that permeates the network of the spongeam and biases the random walk in places, but not necessarily everywhere (see here for an example of an equation for a biased random walk).
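For concreteness, here is a minimal sketch of the kind of dynamic I have in mind: a random walk confined to the network of the spongeam, with the fitness values (where they differ) biasing the choice among neighbouring nodes. The graph and fitness figures below are invented purely for illustration.

    # Minimal sketch: diffusion constrained to the spongeam, modelled as a graph whose
    # nodes are viable SPSRSs; differing fitness values bias the walk, equal values don't.
    import random

    # Hypothetical spongeam: node -> neighbouring viable configurations.
    spongeam = {
        "A": ["B"],
        "B": ["A", "C", "D"],
        "C": ["B", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }
    # Hypothetical fitness "field" over the nodes; a flat region leaves the walk unbiased.
    fitness = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 2.0, "E": 3.0}

    def step(node):
        """Choose the next node from the neighbours, weighted by their fitness."""
        neighbours = spongeam[node]
        weights = [fitness[n] for n in neighbours]
        return random.choices(neighbours, weights=weights, k=1)[0]

    def walk(start, n_steps, seed=0):
        random.seed(seed)
        node = start
        for _ in range(n_steps):
            node = step(node)
        return node

    print(walk("A", 50))   # the walk tends to drift towards the higher-fitness end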

The above points, 1 to 6, are an enunciation of the very general mathematical conditions which are logically necessary for conventional evolutionary processes to take place. But I must voice my standard disclaimer here: My own bet is that the minuscule set of viable self-perpetuating replicators is unable to populate configuration space with a sufficient density to form a connected set which facilitates standard evolution. Doubts on this point are the main reason why I have branched out into my speculative Melencolia I series.

What is the evidence for the spongeam? Since the spongeam is a necessary condition for conventional evolution the evidence for it is as strong (or weak) as the evidence for evolution. As I’m not a biologist I try to avoid commenting on the strength or weaknesses of the observations for evolution. But these observations are nevertheless important: In view of the analytical intractability of the spongeam, the only human alternative to proving or disproving its existence from first principles is observation.

One important point I would like to make here is that the existence of the spongeam would mean that evolution works by using front loaded imperative information rather than the backloaded information of a declarative computation. I explain the difference between frontloading and backloading in this paper.

Atheist Joe Felsenstein clearly does not accept evolution to be an “unguided engine of accidents” and understands that it must be a front loaded process; this is evidenced in the posts I did on his comments about the work of IDists Dembski, Ewert and Marks (see here and here). Although Felsenstein may not think in terms of the spongeam he has nevertheless shown it to be arguable that conventional physics is responsible for the repository of information inherent in fitness surfaces. In the comments sections of the posts I have just linked to, he writes:

If the laws of physics are what is responsible for fitness surfaces having "a huge amount of information" and being "very rare object[s]" then Dembski has not proven a need for Intelligent Design to be involved. I have not of course proven that ordinary physics and chemisty is responsible for the details of life -- the point is that Dembski has not proven that they aren't.

Biologists want to know whether normal evolutionary processes account for the adaptations we see in life. If they are told that our universe's laws of physics are special, biologists will probably decide to leave that debate to cosmologists, or maybe to theologians.

The issue for biologists is whether the physical laws we know, here, in our universe, account for evolution. The issue of where those laws came from is irrelevant to that.

Given that Dembski and colleagues appear to be working with an ancillary, non-immanent concept of intelligence, Joe Felsenstein’s comments above are true: Dembski and co, using their explanatory filter epistemic, work within an Intelligence vs natural forces conceptual framework. This framework is valid for ancillary intelligences such as humans or aliens. Therefore, should Joe Felsenstein plausibly demonstrate that imperative information in the laws of physics is efficacious in the generation of life, the assumed position of many IDists becomes problematical. These IDists have implicitly committed themselves to the view that the imperative information in the laws of physics is classed as “natural” and is therefore incapable of generating life. The motive behind this position is that if physics should be unable to account for life (as they believe) they can then invoke their explanatory filter to argue that life is the work of an ancillary intelligence, or if you like an intelligence of the gaps. However, for Felsenstein the imperative information in physics is more likely to be due to some unknown physics of the gaps.

It should be noted here that some IDists believe the imperative information needed to generate life is provided in punctuated interventional installments rather than in one grand slam act of Divine fiat or by the one-off creation of the grand narrative of physics. See for example the posts here and here.

Tuesday, January 12, 2016

Melencolia I Part 7: Creating Information II

Part 7 of Melencolia I can be downloaded from here.  I reproduce the introduction to this paper below:

 Introduction

The de facto Intelligent Design community lay claim to the notion that information is conserved, at least as far as so-called “natural processes” are concerned. In particular, one of the founding gurus of the movement, William Dembski, has stated the conservation of information in mathematical terms. The aim of this paper is to investigate this claim and its limitations.

What is information? There is more than one valid answer to that question, but this paper will be looking at the kind of information defined as the negative of the logarithm of a probability p; that is, -log p. This is the definition of self-information used by William Dembski. The obvious corollary here is that self-information increases with increasing improbability. The rationale behind this definition is that the lower the probability of an event, the more unexpected it is and therefore the more informative it becomes if it should occur; information is data you don’t expect, don’t anticipate and don’t know to be true; that is, you learn from it when it manifests itself to you and makes itself known.

As an example, consider a configurational segment of length n taken from a series of coin tosses. Given n it follows that the number of possible configurations of heads and tails is 2^n. If we assume that each of these possible configurations is equally probable then any single configuration will have a probability of 2^-n. For large n this is going to be a very small value. In this case our knowledge of which configuration the coin tossing will generate is at a minimum; all possibilities from amongst a huge class of possibilities are equally likely. Consequently, when the coin tossing takes place and we learn which configuration has appeared it is highly informative because it is just one amongst 2^n equally likely possibilities. The measure of just how informative that configuration is, is quantified by I, where:

I = -log 2^-n = n log 2
(0.0)
Conveniently this means that the length of a sequence of coin tosses is proportional to the amount of information it contains.
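As a quick check of (0.0), here is a minimal calculation in Python; I use base-2 logarithms so the answer comes out in bits, which makes the proportionality to n obvious.

    # Self-information I = -log2(p) in bits, for one particular run of n fair coin tosses.
    import math

    def self_information_bits(p):
        return -math.log2(p)

    n = 100
    p_single_config = 2.0 ** -n            # probability of any one particular configuration
    print(self_information_bits(p_single_config))   # 100.0 bits: proportional to n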

Information, as the very term implies, is bound up with observer knowledge: When calculating probabilities William Dembski uses the principle of indifference across mutually exclusive outcomes; that is, when there is no information available which leads us to think one outcome is more likely than another, then we posit a priori that the probabilities of the possible outcomes are equal. I’m inclined to follow Dembski in this practice because I hold the view that probability is a measure of observer information about ratios of possibilities. In my paper on probability I defined probability recursively as follows:


Probability of case C = Sum of the probabilities of cases favouring C / Sum of the probabilities of all cases

(0.1)

This definition is deliberately circular or, if you want the technical term, recursive. Evaluating a recursively defined probability depends on the recursion terminating at some point. Termination will, however, come about if we can reach a point where the principle of indifference applies and all the cases are equally probable; when this is true the unknown common probability cancels out on the right hand side of (0.1) and the probability on the left hand side can then be calculated. From this calculation it is clear that probability is a measure of human information about a system in terms of the ratio of possibilities open to it given the state of human knowledge of the system. This means probabilities are ultimately evaluated a priori in as much as they trace back to an evaluation of human knowledge about a system; the system in and of itself doesn’t possess those probabilities.
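The following is a minimal sketch of how the recursion in (0.1) terminates once the principle of indifference applies: every atomic case is given the same unknown weight, and because that weight appears in both numerator and denominator it cancels. The die example is my own, purely for illustration.

    # Toy sketch of the recursive definition (0.1), assuming the recursion bottoms out in
    # a set of "atomic" cases over which the principle of indifference applies.
    from fractions import Fraction

    def probability(favourable_atoms, all_atoms):
        # Each atomic case gets the same unknown weight w; since w is common to the
        # numerator and the denominator it cancels, so we may as well set it to 1.
        w = Fraction(1)
        return (len(favourable_atoms) * w) / (len(all_atoms) * w)

    # Example: the case "an even number" when a fair die is rolled.
    all_atoms = [1, 2, 3, 4, 5, 6]
    favourable = [a for a in all_atoms if a % 2 == 0]
    print(probability(favourable, all_atoms))    # 1/2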

It follows then that once an improbable event has occurred and is known to have occurred, the self-information of that event is lost because it no longer has the power to inform the observer; the probability of a known event is unity and therefore of zero information. But for an observer who has yet to learn of the event, whether the event has actually happened or not, the information is still “out there”. Information content, then, is observer relative.

Observer relativity means that self-information is not an intrinsic property of a physical system but rather an extrinsic property. That is, it is a property that comes about through the relation the system has with an observer, and that relation is to do with how much the observer knows about the system. Therefore a physical system loses its information as the observer learns about it, and yet at the same time there is no physical change in that system as it loses this information; where then has the information gone? Does it now reside in the observer’s head? But for another observer who is still learning about the system that information, apparently, remains “out there, in the system”.

Given the relativity of self-information, treating it as if it were an intrinsic property of a physical system can be misleading. The observer relativity of self-information makes it a very slippery concept: As an extrinsic property that relates observers to the observed, a system can at one and the same time both possess information and not possess it!

This observer information based concept of probability is very relevant to the subject in hand: that is, to the cosmic generation of life. Given that the configurations of life fall in the complex ordered category it follows that as a class life is a very rare case relative to the space of all possible configurations. So, assuming the principle of indifference over the total class of possible configurations we would not expect living configurations to exist; this is because their a priori probability must be extremely small and therefore their self-information or surprisal value is very high: Living configurations are very special and surprising configurations.

But the fact is life does exist and therefore the a posteriori probability of instantiated life is unity. It is this intuitive contradiction between the a priori probability and the a posteriori probability of life that constitutes one of the biggest of all scientific enigmas.  In this paper I will attempt to disentangle some of the knots that the use of self-information introduces when an attempt is made to use it to formulate a conservation law. I also hope to throw some light on the a priori/a posteriori enigma of life.

Monday, December 28, 2015

Melencolia I Project Articles



I'm using this post to collect together the articles and papers I have produced for my Melencolia I series. I will update this post as I produce written items.  I will be using a link to this post to give access to the whole series.

The errors of the de facto Intelligent Design movement
The de facto ID community, represented by the likes of websites such as Uncommon Descent and The Discovery Institute, talk obliquely of a mysterious Intelligent Agent being the likely default means of explanation when our understanding of "natural forces" is (currently) unable to account for a phenomenon. Of course, everyone knows that these people are really talking about God and the IDists' studied detachment from theology comes over as an affectation, disingenuous even. Talking vaguely about "Intelligent causes", however, does give a scientific gloss to their work; after all, it is true that archaeology is in the business of separating out the "natural" from the "artificial". Moreover, if ever an obviously empirical situation should arise like that depicted in 2001 Space Odyssey, the question of intelligence and the nature of that intelligence would loom large in scientific circles. So arguably "Intelligent Design" is a little like archaeology and SETI and therefore does have a prima facie claim to being science.

But of course we know that the de facto IDists are really thinking theologically and that is where their mistakes lie: They have in fact committed scientific, tactical and theological errors. Their error is scientific because their epistemic filter is misconceived; this misconception leads to a natural forces vs God dichotomy which in turn helps foster scientific blunders such as the claim that evolution is inconsistent with the 2nd law of thermodynamics. Their error is tactical because their pretense at doing science uncontaminated by theology is just that, a pretense, and everyone, especially atheists, can see it. Their error is theological because God is both immanent and eminent and therefore He is immanent in natural forces. It follows then that we can seek God in those so-called natural forces and not just as an ancillary outside intelligent agent; or perhaps I should say that those "natural forces" are in God. For God is the eminent and immanent context of all that His authorship permits to be reified in the story He tells. The immanence of God means that He is of an entirely different genus to any ancillary intelligence such as man or aliens; if we are theologically turned on we don't expect ancillary intelligence to be a good model for God.

In order to maintain a scientific gloss we find that IDists will often try to avoid mention of God in their works. Not only has this tactic miserably failed but I believe it is impossible for the Christian to carry on like this. If we are dealing with immanent intelligence and not just ancillary intelligence this subject cannot be approached without mention of the immanent Sovereign Manager and Creator. That's not a mistake I intend to make myself. My project is explicit about seeking the Sovereign Manager and Creator of our cosmos. I therefore make explicit mention of Him. Also, unlike the IDists I am not making strong claims of doing exclusively science (although some parts will be science) since my epistemology is more broad brush than spring-extending and test-tube-precipitating science. This will mean that any atheist who dislikes the idea of a Sovereign God being at the heart of a study will not find grounds for accusing me of trying to pull the wool over his/her eyes. There is one thing worse than a deceiver and that is an incompetent deceiver who is unaware of his attempt at deceiving both himself and others.

So all in all I've become increasingly displeased with the de facto ID movement and their transparent facade of studied scientific detachment. But I'm in good company: I don't think Sir John Polkinghorne is pleased with them either.

See also: 
http://quantumnonlinearity.blogspot.co.uk/2015/10/why-i-repudiate-de-facto-intelligent.html


Main Papers and Articles.
Supporting and Relevant Articles
Configuration space Series
William Dembski’s views:
Felsenstein vs. Dembski
Felsenstein and English vs. Dembski, Ewart and Marks
http://quantumnonlinearity.blogspot.co.uk/2015/11/intelligent-designs-2001-space-odyssey.html

Thinknet Project Articles.


My Thinknet Project uses ideas taken from Edward de Bono's book, The Mechanism of Mind, in an attempt to elucidate selected aspects of the processes of intelligence. Clearly a complete account of intelligence is well beyond our current science and consequently my terms of reference are satisfied if at least some inroads can be made into the subject.

I'm using this post to collect together the articles and papers I have generated for my "Thinknet" series. I will update this post as and when I produce written items. I will be using this post as a link to give access to the whole series.

ThinkNet papers


Supporting articles


On intelligence and incomputability

General article on procedural vs. declarative computation

Wednesday, December 16, 2015

The Nature of Intelligence

The question of the nature of Intelligence/sentience presents intellectual traps that are difficult to avoid; not least the problem of a nested regress!

This post on Uncommon Descent is a fine example of how, on the Intelligent Design question, I'm probing an entirely different line of inquiry from that of the de facto IDists. The post is by Denyse O'Leary, who quotes one of the ID community's gurus, Robert Marks.

Firstly, setting the scene:

Anything algorithmic can be done by a computer. Give me a recipe for doing something, and I can whip it up in the kitchen. There are things which are not algorithmic the most celebrated of which is Turing’s halting problem: there exists no algorithm able tell whether or not a computer program runs forever or halts. (The halting algorithm must work for any and all computer programs.)

But a computer program will halt or won’t halt. But since there is no algorithm to figure this out, the halting problem is undecidable. We don’t know before running the program whether or not it will halt. It could run trillions of years and then halt long after we’re dead. If it doesn’t halt, we may never know (unless we know the so-called busy beaver numbers which is the same as knowing Chaitin’s number which is unknowable. But I digress.)

Clearly there are some elementary algorithms for which we can prove they halt. However, the tenor of the halting theorem is that there is no general algorithmic procedure which takes any algorithm as a parameter and is then able to determine whether it stops. That is, we may be able to determine halting conditions in special cases, but not in the general case. The halting theorem can be proved by showing that when an attempt is made to submit a hypothetical general halting detection algorithm to itself as a parameter this results in a contradiction. Hence, some questions we submit to algorithms are incomputable. Incomputability has the potential to arise whenever an algorithm attempts to flag conditions about itself: Like the well known contradictions arising from Russell's paradox, it is not possible for an algorithm in general to talk about itself without raising contradictions.
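For what it's worth, here is a minimal sketch (in Python, with names of my own choosing) of the self-reference at the heart of that proof: assume a general halting detector exists and then build a program which does the opposite of whatever the detector predicts about that program run on itself.

    # Sketch of the diagonal argument behind the halting theorem. halts() is the
    # hypothetical general detector assumed, for the sake of contradiction, to exist.
    def halts(program, program_input):
        """Hypothetically returns True if program(program_input) eventually halts."""
        raise NotImplementedError("assumed for the sake of contradiction")

    def contrary(program):
        # Do the opposite of what the detector predicts about program run on itself.
        if halts(program, program):
            while True:      # loop forever
                pass
        else:
            return           # halt immediately

    # Feeding contrary to itself exposes the contradiction: if halts(contrary, contrary)
    # is True then contrary(contrary) loops forever; if it is False then it halts.
    # Either way the detector is wrong about at least one program, so no such total,
    # correct halts() can exist.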

But I digress. I'm actually more interested in the following quotes from Marks, quotes which bring to light something I've long suspected would be a position favoured by a de facto IDist like Marks. Basically it's another Intelligence-of-the-Gaps sentiment:

Lastly, Roger Penrose in Shadows of the Mind and The Emperor’s New Mind makes the case the human mind, through creativity and the creation of information, does nonalgorithmic things (and is therefore not merely a computer).

I am starting to believe creation of information requires a nonalgorithmic process, hence intelligent design.

This is not unexpected: As I have made clear many times on this blog, IDists like Marks are dualists who see a sharp distinction between "intelligent agency" and "natural forces". This dualism is embodied in the IDists' explanatory filter; this filter ensures that when "natural processes" fail as an explanatory device a default is forced to "intelligent agent". So it is no surprise to find Marks casting around for reasons why intelligence should be classed as an entirely different genus to "natural processes". Marks thinks information cannot be created by "natural processes". This is an error in itself which I will be publishing on shortly. However, if you believe "natural forces" can't create information it is a very easy next step to posit that some kind of mystical unknowable process must be creating information, and of course that process can only be the apparent inscrutability of intelligence! How fitting, then, if intelligence falls into the category of the non-algorithmic!

I had anticipated long ago that IDists like Marks would settle on Penrose's proposal that intelligence is non-algorithmic. My own opinion is that this proposal is unlikely and I give my reasons for this here and here. All evidence suggests to me that the human mind is finite and therefore the ontology of human intelligence has the same reflexive limitations that give rise to the halting theorem and incomputability in general: Viz: human intelligence is based in a system that cannot make certain general statements about itself without those statements invalidating the very conditions they are attempting to comment on; there are certain things we cannot know about ourselves. Ergo, human intelligence doesn't step outside the limitations of incomputability.

From my perspective I feel that the de facto IDists are more than welcome to explore this dualist line of inquiry whereby intelligence is categorized as a different genus of process capable of exploring the realm of incomputability; I'm not trying to stop them and I'm happy that they follow this very different line of inquiry; although if they are committed to this view of intelligence they will find progress difficult.

But the polarized dualist backdrop against which de facto ID plays out doesn't favour the intellectual nuancing needed to explain why some people follow one route and some another. Much more in line with de facto IDism's embattled community is the hunting out of fifth columnists and traitors and then hanging them out to dry; as we will see in my next post!

Thursday, December 10, 2015

2001 Space IDyssey

The de facto IDists I've come across believe the configurations of life have been patched in, as an artifact might be, against a backdrop "natural landscape", and therefore life is regarded as entirely anomalous against the background physics of the Cosmos.

The following post has recently appeared on Panda's Thumb. It's a comment by Nick Matzke on the debate between Panda's Thumb posters Joe Felsenstein and Tom English (FE) and IDists William Dembski, Winston Ewert and Robert Marks (DEM).

***

Game over for antievolutionary No Free Lunch argument
By Nick Matzke on December 4, 2015 10:58 PM | 121 Comments
This has been obvious from the start, but as far as I know it has taken 10 years for the ID guys to finally admit it. Winston Ewert writes at the Discovery Institute blog:
However, Felsenstein and English note that a more realistic model of evolution wouldn’t have a random fitness landscape. Felsenstein, in particular, argues that “the ordinary laws of physics, with their weakness of long-range interactions, lead to fitness surfaces much smoother than white-noise fitness surfaces.” I agree that weak long-range interactions should produce a fitness landscape somewhat smoother than random chance and this fitness landscape would thus be a source of some active information.
GAME OVER, MAN. GAME OVER! The whole point of Dembski et al. invoking “No Free Lunch” theorems was to argue that, if evolutionary searches worked, it meant the fitness function must be designed, because (logical jump herein) the No Free Lunch theorems showed that evolutionary searches worked no better than chance, when averaged over all possible fitness landscapes.
Emergency backup arguments to avoid admitting complete bankruptcy below the fold, just so I’m not accused of leaving out the context

--------------------------------------------------------FOLD------------------------------------------------------------------.
We disagree in that I do not think that is going to be a sufficient source of active information to account for biology. I do not have a proof of this. But neither does Felsenstein have a demonstration that it will produce sufficient active information. What I do have is the observation of existing models of evolution. The smoothness present in those models does not derive from some notion of weak long-range physics, but rather from telelogy as explored in my various papers on them.
As always, the ID objections to evolution, when stripped of pseudo-technical camouflage, boil down to “I just don’t buy it because (gut feeling).”
See also: recent PT posts and Jason Rosenhouse at EvolutionBlog.
***

FE did a good job of demonstrating the plausibility (but admittedly not "proof", as Ewert points out) of the idea that by means of fitness surfaces physics provides the information needed for evolution to occur. Although like Ewert I have reservations about fitness surfaces (doubts which I express here) I don't have an in-principle objection to the notion that physics (even if it has to be modified) is implicated as the agency of biological information. This is where I differ from the view of the average IDist. They are likely to have an in-principle objection to any idea that physics could be the providential means by which evolution has been directed (and for standard evolution to work it must be channelled). But for the IDists I have featured in this blog the default view appears to be that the information for life has been patched in ad hoc by God (and let's make no bones about just what de facto ID really means by "intelligence"). This divine ad hoc activity is not dissimilar to the way the intelligence behind the 2001 Space Odyssey monolith patched in an artifact on the otherwise "natural" lunar landscape. The 2001 Space Odyssey ID paradigm treats biological structures as bolt-on extras in the cosmic scene, extras that can only be explained by the activity of an auxiliary intelligence, much like alien artifacts. This view is at least in part driven by the explanatory filter epistemic. This epistemic leads to a very suspect theology where God works on the natural order rather than in the natural order.

There's not enough information in the above quotes to know whether Ewert's take on teleology leads him away from this god-of-the-gaps ad hocery or not. However, I suspect that behind his rejection of FE's work lies de facto ID's standard false dichotomy of God vs. natural forces. God works the way he works: If God works through the physics of fitness surfaces (leaving aside my doubts) then that's the way he works and we have to get used to it; I see no point in opposing biologists simply for the sake of it.

Relevant links:

Tuesday, December 08, 2015

Nice Guys Finish Last

I might agree with that!

During the nine years I have maintained this blog I have come across several nice guys; William Dembski and Tim Ventura are a couple of names I can mention straight away. But both these gentlemen have ended up being taken for a ride, perhaps in part because they are prepared to give even rogues some leeway: See here and here.

Another nice guy is Paul Davies, professor of physics and science broadcaster. The good professor, although no doubt a very busy and clever man, took the time to reply to an email of mine that I wrote in January 2006 after reading his fascinating book "The Goldilocks Enigma". That brief correspondence can be seen here. Recently, however, the Prof has run into a bit of aggravation because he has been caught discipline-trespassing; he has been dabbling in biology!

In his no doubt genuine desire to lend a helping hand and move things along, Davies seems to have been completely and merrily unaware that his efforts weren't really welcome, mostly because he looks to have made a dog's dinner of it! Davies and I share an interest in the apparent physical anomaly of life, the question of its origins and also in the enigma of the very particular physical regime that pervades our cosmos; that's why I have avidly read several of his books. However, unlike myself Davies dares to get rather immersed in biological details, details that on his own admission he may not know a great deal about. Wiki quotes him as saying:

I had the advantage of being unencumbered by knowledge. I dropped chemistry at the age of 16, and all I knew about arsenic came from Agatha Christie novels. 

So are biologists so incompetent that they need someone who is completely uninitiated into their trade secrets to casually and easily breeze in and show them how it's done? Perhaps they do, but you can guarantee they won't like it any more than the inhabitants of a spaghetti western town like the arrival of Clint Eastwood! Moreover, many biologists are none too keen on what they perceive as physics hubris at the best of times!

Reading his Wiki page we find that, typical of Davies' very helpful persona, he unwisely jumped in to assist Felisa Wolfe-Simon with her radical and risky "arsenic can replace phosphorus" theory, a theory which according to some should be retracted. That theory seems to be in the Martin Fleischmann cold fusion league; farces like this make me wonder what right-minded person would want to engage in risky blue skies intellectual endeavors and have a reputation to lose as well as the salary that pays the mortgage!

But for Paul Davies, no doubt a man of independent means, none of this has dampened his enthusiasm for biological dabbling. In his latest move he has taken up theorizing about cancer along with his physics colleague, Charlie Lineweaver. Davies and Lineweaver are developing a theory that cancer is a kind of recapitulation phenomenon where cells revert to an ancient ancestral phenotype; that is, they return to simple cell division and multiplication, the very basic activity of  the first life.

The cancer problem has similarities with the problem humanity has had in the search for a sustainable energy source; scientists have worked on both problems all my life and although there have been worthy advances in both areas there have been no panacea catch-all type breakthroughs. The log-jam here has provided a space for the paranoiac cranky conspiracy theorists who feel that someone somewhere must be covering up what they know. The twists and turns of devious and fanciful conspiracy-theorist logic have no compunction about dreaming up what the imagined Machiavellian conspirators might have to gain from such a cover-up; the usual suspects involve control freakery and money making; there may even be an alien or two thrown in for good measure.

So all in all Paul Davies and his colleague, clever and original thinkers though they may be, are unlikely to find their latest interest a walk in the park compared with their "real science" (!) of pondering abstruse physics equations! In fact for Davies the walk in the park has already turned into something more like a walk in Jurassic Park! For none other than PZ Myers has been viciously savaging Davies' ideas on the subject of cancer! The general thrust of Myers' argument is that cancer is simply a case of corrupt DNA and that can take many forms; cancer is far too pathological, random and bizarre to be identified as a living atavism.

But the even bigger sin of Davies in the eyes of someone like Myers is that he's soft on religion. He was awarded the Templeton Prize in 1995 and overall he tends to be sympathetic toward a religious outlook (see Davies' Wiki page and the quote in my picture above). In this connection here's how Paul responded to a question that I slipped in at the end of my correspondence:

My Question: The moral of the story may be that artifacts in one’s perspective have a bearing. As we know, Newtonian dynamics can be developed using the “teleological” looking extremal principles. But, of course, these are mathematically equivalent to the conventional view that sees one event leading to another in sequence without recourse to end results. It is almost as if the choice of interpretation on the meaning of things is ours to make! Thus, perhaps the way we personally interpret the cosmos constitutes a kind of test that sorts out the sheep from the goats! Which are you? Some theists (but not me, I must add!) probably think you are a goat, but then some atheists probably have the same opinion! Can’t win can you?

Paul Davies: I hate being pigeonholed, so I won't respond to the sheep/goats question.

Well, I think they've well and truly pigeonholed Paul whether he likes it or not! Both Christian fundamentalists and evangelical atheists are likely to have it in for him regardless! You just can't win can you? Especially if you are Mr. Nice Guy!

Wednesday, November 18, 2015

Intelligent Design's 2001 Space Odyssey Style Search for Intelligence of the Gaps


Bad Theology: ID's search for intelligence might have gone off into the wild black yonder; but perhaps it was right under their noses all along.

In a post on Panda’s Thumb Joe Felsenstein continues the same debate with IDists which I looked at in the following posts:

(Also relevant to the material I present below is this link:

The above series of posts is an analysis of Joe Felsenstein and Tom English’s reaction to the work of IDists William Dembski, Winston Ewert and Robert Marks (DEM for short). Below I publish quotes from Felsenstein’s latest post and as usual interleave them with my own comments. At the start of his post Felsenstein makes it clear that…

FELSENSTEIN: The issue is not the correctness of their [DEM’s] theorems, but given that they are correct, what flows from them. Dembski, Ewert, and Marks (DEM) may object that they did not say anything about that in their paper….
We don’t think that it is a stretch to say that DEM want their audience to conclude that Design is needed.
Let’s look at what conclusions Dembski, Ewert, and Marks draw from their theorems. There is little or no discussion of this in their paper. Are they trying to persuade us that a Designer has “frontloaded” the Universe with instructions to make our present forms of life?

My Comment: I think I largely concur with that: As I constantly say on this blog, de facto ID is essentially God-of-the-Gaps (although they will deny it), or perhaps in this instance “God-of-the-frontload”. If, as Felsenstein says, DEM don’t discuss in their paper the origin of the information required to generate life, that may be because DEM believe the major part of their epistemic task is complete. The epistemic procedure of de facto ID’s “explanatory filter” prompts a default to “intelligent design” if no “natural causes” can be found. To this end DEM’s paper has the role of locating an explanatory gap which they know all too well will be filled in by their followers; as Felsenstein says: “Are they trying to persuade us that a Designer has “frontloaded” the Universe with instructions to make our present forms of life?” But the explanatory filter epistemic as formulated by Dembski and used by his ID community has its limitations, especially in theology.


FELSENSTEIN 1. Their space of “searches” includes all sorts of crazy searches that do not prefer to go to genotypes of higher fitness – most of them may prefer genotypes of lower fitness or just ignore fitness when searching. Once you require that there be genotypes that have different fitnesses, so that fitness affects survival and reproduction, you have narrowed down their “searches” to ones that have a much higher probability of finding genotypes that have higher fitness.

2. In addition, the laws of physics will mandate that small changes in genotype will usually not cause huge changes in fitness. This is true because the weakness of action at a distance means that many genes will not interact strongly with each other. So the fitness surface is smoother than a random assignment of fitnesses to genotypes. That makes it much more possible to find genotypes that have higher fitness.

In short, with their theorems, Design is not needed to explain why a reproducing organism whose genotypes have fitnesses might be able to improve its fitnesses substantially. Just having reproducing organisms, and having the laws of physics, gets an evolving system much farther than a random one of DEM’s “search

My Comment: Firstly, there is a good reason why DEM must consider the whole domain of possible searches. The point of their whole exercise is to show that whatever the “search” (or better “process”) behind the generation of life in our cosmos may be, within the full set of possible Dembskian searches it must be a very special case; that is, it is highly atypical. The conclusion, then, is that given the principle of equal a priori probabilities the cosmic search must be a highly improbable case and therefore of high information. So, I myself can see the point of this enumeration of the entire domain of possible searches. And yet, as I have discussed in the previous parts of this series, Felsenstein is also right; Viz: if one posits a) a differential in the fitness of possible configurations and b) our particular laws of physics which smooth out the fitness surface, then it follows that the cosmic physical regime goes a long way to providing the front loaded information needed for the generation of life.

However, I must register here my dissatisfaction with the fitness surface model. In this post I gave reasons why this model makes implicit assumptions about the survivability and reproducibility of the organic structures that respond to the fitness surface. In consequence, far more fundamental than the fitness surface is the mathematical object I call the “spongeam”. This is a conjectured fully connected but extremely tenuous sponge-like set in configuration space. This conjectured abstract structure is defined by the requirement that it is composed entirely of (organic) forms which are stable and complex enough to survive and replicate. The point is that these forms need not be very fit, but nevertheless must be fit and complex enough and sufficiently connected to allow some kind of evolutionary diffusion (by replication) across the conjectured channels of the spongeam. In fact some regions of the spongeam may not even have any fitness slopes at all; the fitness could be unchanging across the spongeam in those regions. In these “flat” regions evolutionary diffusion will be unbiased, although in other regions where fitness changes the diffusion will be biased. In the latter regions the idea of “sloping fitness surfaces” will apply. It follows then that “fitness” is not as fundamental as the spongeam; different levels of fitness may or may not be superimposed on the spongeam. If Felsenstein is right then it is the spongeam which is implicitly frontloaded into the cosmos via our physics. The final twist here, however, is that I don’t think the spongeam exists; therefore neither do fitness surfaces. (See the post I have already linked to.)

Felsenstein quotes Dembski:

DEMBSKI: The term “evolutionary informatics” was chosen deliberately and was meant to signify that evolution, conceived as a search, requires information to be successful, in other words, to locate a target. This need for information can be demonstrated mathematically in the modeling of evolutionary processes. So, the question then becomes: Where does the information that enables evolutionary searches to be successful come from in the first place? We show that Darwinian processes at best shuffle around existing information, but can’t create it from scratch. [As it turns out this latter statement by Dembski doesn't do justice to the subject as I intend to show in due course - TVR]

I see this work as providing the theoretically most powerful ID challenge against Darwinian evolution to date. As for the attention this work has garnered, there has been some, but Darwinists are largely ignoring it. I’m justified in thinking this is because our methods leave them no loopholes. We’re not saying that evolution doesn’t happen. We’re saying that even if it happens, it requires an information source beyond the reach of conventional evolutionary mechanisms.

My Comment: The first paragraph here basically concurs with what I have just said: For evolution to successfully generate life there must be some kind of informational “frontloading” (unless we are to accept interventional tinkering). Felsenstein is saying that this information is probably implicit in the laws of physics, laws which imply a fitness surface smooth enough for conventional evolution. Felsenstein might be right (although as I have said I personally have reservations about this conclusion). Interestingly, Dembski says “We’re not saying that evolution doesn’t happen” and is in effect admitting that evolution could conceivably be tapping into information from somewhere, perhaps the spongeam. So even if evolution does occur DEM’s conclusion that a practical and successful search requires a priori information still applies. And Felsenstein would agree!

You might think, then, that Dembski has got the “materialists” into a “heads I win, tails you lose” impasse. But generally IDists are unwilling to exploit this advantage because lurking in the background is Western dualism, a dualism embodied in Dembski’s explanatory filter which implicitly sets natural forces against divine intelligent design. It is therefore dangerous for IDists to even admit that evolution might be sufficiently provisioned with the requisite information (via the spongeam, perhaps) to do the job; for if they do, then cranking the handle of the explanatory filter leads to an embarrassing answer. This has the effect of making “evilution” taboo in the ID community. This is why in the second paragraph Dembski says:

I see this work as providing the theoretically most powerful ID challenge against Darwinian evolution to date.

“Yes” and “no” to that, Dembski! “Yes” if you are going to depict “Darwinian evolution” as the straw man caricature of an unguided process, as do some interlocutors on both sides of the debate. And “no” if one understands, and certainly Felsenstein understands it, that the “fitness surfaces” which may be implicit in physics provision evolution with the requisite directional information.

Before I proceed with the next quotation I need to make the following disclaimer. I don’t accept the habitual assumption of the de facto ID community that natural processes “can’t create information” and that information only emerges from the mysterious black box of the so-called “intelligent agent”. This de facto ID error is bound up with what is likely to be a misconception about the nature of probability. In fact my latest work (which I hope to post in due course) suggests that the creation of information is exactly the intended role of those so-called “natural processes”... watch this space. In an earlier post here I explore some of the complexities of the information concept which impact this matter.

Felsenstein also quotes Robert Marks:

MARKS: By looking to information theory, a well-established branch of the engineering and mathematical sciences, evolutionary informatics shows that patterns we ordinarily ascribe to intelligence, when arising from an evolutionary process, must be referred to sources of information external to that process. Such sources of information may then themselves be the result of other, deeper evolutionary processes. But what enables these evolutionary processes in turn to produce such sources of information? Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality’s ability to produce the required information. Evolutionary informatics, while falling squarely within the information sciences, thus points to the need for an ultimate information source qua intelligent designer.

My Comment: Firstly let me say that the average reasonably intelligent yet non-technical Christian will be completely amazed and fazed by gurus like Dembski, Ewert and Marks and unable to ferret out the weaknesses in their position. It all looks oh-so-technically-expert and this in itself is heart-warming and reassuring to the average guru follower who can connect with the dualist idea that only black-box intelligence creates information. And yet there is a deep issue with what Marks says above. Given Marks’ habituated mode of thought it doesn’t enter his head that in any practical sense of the word so-called “natural processes” can create information. Instead he sees conservation of information working much like energy conservation. From the perspective of Marks’ dualistic habits of mind it is taken for granted that physical causality is wholly different from the “intelligent designer”. To him and others in the de facto ID community the designer is the mysterious and analytically indivisible entity sourcing information at the end of his information regress. It never occurs to him to make the connection that perhaps physical causality may be that intelligence at work.

At one point Felsenstein quotes a question by ID supporter Casey Luskin.

LUSKIN: What is Active Information, and why does it point to the need for Intelligent Design to solve a problem, rather than an unguided evolutionary process? ……..Well, we appreciate the work that you [Marks] are doing and the papers that you’re publishing analyzing many of these evolutionary algorithms and asking whether they support a Darwinian view of life or an Intelligent Design view of life. (My emphasis)

My Comment: If the spongeam and the fitness surfaces which ride on its back exist, as Felsenstein thinks they do, then “Darwinism” is certainly not unguided! DEM’s work in fact shows that conventional evolution cannot be unguided. It is ironic that the B-teams on both sides of the debate err on this notion of unguided evolution – see here for example.

Felsenstein quotes Ewert:

EWERT: While some processes are biased towards birds, many others are biased towards other configurations of matter. In fact, a configuration biased towards producing birds is at least as improbable as birds themselves, possibly more so.

Having postulated Darwinian evolution, the improbability of birds hasn’t gone away; we’ve merely switched focus to the improbability of the process that produced birds. Instead of having to explain the configuration of a bird, we have to explain the configuration of a bird-making process.


My Comment: This is certainly true and this is what DEM have successfully shown. And yet there is a deep implicit issue embedded in Ewert’s statements as to the significance of his claims. It is over that significance that the de facto ID movement is going astray. The ulterior motive behind the above, a motive which is clear to Felsenstein and myself, is that Ewert thinks he is paving the way for the explanatory filter to default us to the “intelligent agent”, whatever he means by that. The big problem, as I will be proposing in my latest work, is that intelligence too classifies as a highly improbable configuration and this fact points to a major loophole in the work of the ID gurus.

How does Felsenstein react to Ewert’s statements?....

FELSENSTEIN: This example leaves it unclear what the “process” is. The reader may be tempted to conclude that it is the process that models an evolving population. And then the reader may think that if this evolutionary process succeeds in improving fitness, that some outside force is needed to set up the process so that it succeeds. But for their theorem to apply, the processes considered must include processes that make no sense as models of evolution. Processes that wander around among genotypes randomly, without being more likely to come up with higher fitnesses. Even processes that prefer to find genotypes with lower fitnesses. All of those are among the processes that must be eliminated before we get to processes in which genotypes have fitnesses, and those fitnesses affect the outcome of evolution.

My Comment: As I have already said, DEM have rightly included all the possible searches in their enumeration and that includes all the highly disordered searches which, practically speaking, are fruitless. Disorder, by definition, has an overwhelming statistical weight and therefore a successful evolutionary search is a very rare case when set against the class of disordered searches. Using the principle of equal a priori probabilities it follows, then, that a practical evolutionary process is a highly improbable search and this by definition implies a high information object. But then Felsenstein is also right; he presents a good prima facie case that physics implies different levels of fitness and a smooth fitness surface, which is where the information required by DEM lies according to Felsenstein. (Although I must once again register my reservations about the existence of the spongeam, on which the existence of the fitness surface depends.)

Felsenstein further comments on Ewert:

FELSENSTEIN: In his reply, Ewert invokes the smoothness of the fitness landscape, and considers the smoothness to result from “laws or self-organization”

(EWERT): *Quote* It is not sufficient to invoke the three-fold incantation of selection, replication, and mutation. You must also assume a suitable fitness landscape. You have to appeal to something beyond Darwinism, such as laws or self-organization, to account for a useful fitness landscape. *Unquote*

He does not seem to realize that those “laws” might simply be the laws of physics, and that the “self-organization” can simply be self-reproduction, something that all organisms do.

My Comment: Although DEM are right in asserting that any working conventional evolutionary process must have, a priori, a high information content, it is notable that Ewert doesn’t acknowledge that this information could conceivably, as Joe Felsenstein plausibly maintains, reside in the rarity of our familiar physical regime. One might think that by admitting this as a possibility at least, the IDists could have their cake and eat it; they could even claim that Felsenstein is admitting the existence of “active information”! But no, the de facto ID movement has painted itself into a corner here: for if something along the lines Felsenstein is suggesting could be satisfactorily demonstrated, then not only would that bugbear of ID, the explanatory filter, stab the IDists in the back, but the whole thrust of IDism, which has been unequivocally against any hint of “Darwinism”, would make it look as though they had been defeated. The IDists have fostered the fearsome dualist spectre that if those loathed “natural forces” are doing the creative job all along then ID is worsted. Notice Ewert’s reference to so-called “self-organization”, a vague concept which has yet to make any substantive contribution to the evolution debate. And yet if Felsenstein is right, the solution could be staring the IDists in the face; namely, that if the requisite fitness surfaces are implicit in physics then in effect common-or-garden physics is doing the job of “self-organisation” (assuming, I must add, the existence of the spongeam). Even though this outcome to the debate would still be consistent with the work of DEM, it would cloud the tribal clarity of the IDists’ shrill anti-Darwinist rallying call, a call which appeals to the dualist thinking of every Christian sect between here and the Watchtower’s Brooklyn HQ.


FELSENSTEIN: It is clear from these examples that Dembski and Ewert mean their theorems to be read as evidence for an Intelligent Designer either frontloading the evolutionary process, or for an Intelligent Designer intervening in it. But Tom English and I have shown that their Active Information can come about without that. It can come about simply by having a reproducing organism which has different genotypes, which have different phenotypes, and these have different fitnesses. And further Active Information can also come about by the predisposition of the laws of physics to bring about fitness surfaces smoother than “white noise” fitness surfaces.

Could that Active Information be enough to explain the evolution of, say, a bird? Do they have some argument that further “configuration of a bird-making process” is needed beyond that? There is actually nothing in their argument that requires that there be further Intelligent Design

My Comment: Yes, given the work of DEM it does follow that the generation of life demands frontloading; either that or the ad hoc fiat of tinkering and intervention. Felsenstein is plausibly maintaining that the mutually acceptable frontloading is down to physics. Essentially, then, DEM and Felsenstein aren’t at odds, for they both see the need for some kind of frontloading (if not interventional tinkering); but they disagree over the significance and meaning of this fact. What makes the situation more complex is that, for reasons I’ve already outlined, the IDists are unwilling to admit that this frontloading could be down to common-or-garden physics; they’d much prefer their opponents to talk of some exotic and speculative “self-organization”, a vague idea which currently has little real intellectual traction. For polemical reasons the IDists will not entertain the prosaic physics solution; a choice imposed on them by the dualism implicit in the explanatory filter, which excludes any middle ground in their intelligence-versus-natural-forces dichotomy. The IDists have committed themselves to the idea that some special ingredient X is needed for life to exist. Felsenstein says that that ingredient could well be the physics we all know and love. The IDist is inclined to say “no!” to that because otherwise it would cut across the anti-Darwinist raison d'etre he has fashioned for himself. For the IDist, ingredient X is likely to be thought of as some inaccessible “black box” intelligence and not mere prosaic physics; to admit the latter would be a terrible anti-climax to the de facto IDists' 2001: A Space Odyssey message, namely that they have found an artifact not created by common-or-garden “natural forces” - and this in spite of the fact that even if Felsenstein is right the IDists still have a case to argue!

The ID community's loathing of "Darwinism", even if that loathing doesn't directly cut across DEM's ideas, nevertheless goes deep enough to cause division within the ID community. See for example this post by Vincent Torley on Uncommon Descent, where in the comments section Torley is accused of supporting “Darwinism”. In comment 86 we read: “It almost seems as if VJ Torley is turning Darwinist on us. Someone please correct me if I’m wrong.” Whether or not it is consistent with conservation of information ideas, as a rule the average right-of-centre ID follower hates "Darwinism" and cannot abide it.

The added irony is that Felsenstein himself takes for granted the same dichotomy of intelligence vs. natural forces. Given his outlook on life it is likely, of course, that he believes “natural forces”, whatever that means, have done the job of evolving life. Since he has shown (plausibly) that physics could be the seat of so-called active information, his conclusion, as per the explanatory filter, is that intelligent agency is not required as an explanation. He, like his IDist antagonists, sees it as a straight choice between natural forces and God. Felsenstein is a dualist in his conceptual categories when it comes to thinking about God. As Felsenstein says above: "But Tom English and I have shown that their Active Information can come about without that", and by "that" Felsenstein means an "Intelligent agent". For him physics trumps intelligence.
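
For readers unfamiliar with the jargon, the quantity being argued over is defined in the DEM literature roughly as follows (this is my summary of their standard definitions, so treat the exact notation as mine):

Endogenous information, I_Omega = -log2(p), the information cost of blind search hitting the target

Exogenous information, I_S = -log2(q), the corresponding cost for the actual (assisted) search

Active information, I+ = I_Omega - I_S = log2(q/p), the information credited to whatever biases the search toward the target

In these terms Felsenstein’s contention is that common-or-garden physics, via self-reproduction and relatively smooth fitness surfaces, can be the source of I+; the quarrel is over what that source signifies, not over whether the quantity is well defined.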



Epilogue

Even if Felsenstein is winning the argument, this still leaves us with the question of why we have our particular physical regime with its miraculous fine-tuning. For IDists, of course, this is the work of the God of the Gaps, but for Felsenstein it’s probably the work of Physics of the Gaps, perhaps some kind of multiverse. But whichever way we look at it, finite chains of human logic will always leave an inevitable grand logical hiatus unfilled. The irrational arbitrariness of an impenetrable wall of brute-fact contingency faces us at the end of our quest for obliging reason; positing neither physics nor intelligence will rid us of this super gap (but see the appendix). Therefore I suggest we leave it there and get back to the thing we do best, which is to describe the cosmos we have been provisioned with using the intellectual tools the good Lord has also provisioned us with.

From where I’m standing the results of Dembski, Ewert and Marks are starting to look like a misinterpreted mathematical trivialism; I hope to expand on this topic in later posts. What ID is missing is that those much-despised so-called “natural processes” are actually provisioned, in any practical sense of the word, to do exactly what IDists dread and fear in their darkest dreams; namely, to create information. But then why should a Christian be surprised at that? God is immanent in his world.



Appendix (Added 21 Nov)
Is there any hope that the finite human mind could ever grasp the concept of Aseity? Two lines of inquiry, from the atheist and theist camps respectively, might run as follows:

Atheistic Aseity: This line of argumentation might be based on some kind of super-Copernicanism; that is, the super-multiverse where all options are somehow realised, an idea having its strongest form in Max Tegmark’s mathematical universe. Because everything exists in the super-verse, it follows that everything has an existence probability of unity. The Shannon “surprisal value”, that is, the information value of the existence of any particular state of affairs, then sinks to zero. Since the human intellectual demand for explanation comes in large part from our intuitive sense of surprise as to why particular things are as they are, it may be argued that super-Copernicanism goes some way to assuaging our sense of surprise at apparent contingencies; for in the super-verse nothing conceivable is given preferential existential treatment; the only surprise left is why there is something rather than nothing. But it might be argued that everything existing is no more surprising than everything failing to exist at all!
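
To spell out the Shannon point being used here (this is just the standard textbook definition, stated in my own notation):

Surprisal of a state of affairs x:  I(x) = -log2 P(x)

If the super-verse realises every possibility then P(x) = 1 for every x, and so I(x) = -log2(1) = 0: no particular state of affairs carries any informational surprise.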

Regarding the epistemic question as to why human beings can know anything at all in such an indifferent and dispassionate universe, it might be argued that in a universe of indifference we aren’t going to be specially targeted for deception; hence errors average out and we can be reasonably sure that we can acquire knowledge about some things, if not everything. To claim that we could know nothing in an impersonal universe is tantamount to the inverted conceit of the conspiracy theorists who believe that they are being specially targeted for deception. One thing to be said for Copernicanism is that it seems to be an antidote to the narcissism of fundamentalist paranoia!

However, there are problems with this view: namely, the simulation argument, and the question of why we know as much as we do; we would expect the universe to be far more random and unknowable if some form of super-Copernicanism held sway.

Theistic Aseity: This line of thought is potentially much more fruitful to my mind. Early in my intellectual career I was attracted to positivism, the general idea that everything swings to a high degree on observer experience; in fact strong positivism suggests that all else besides experience is meaningless. Strong positivism is counter-intuitive when it comes to in-practice and in-principle realities that cannot be experienced, like the planets of distant galaxies or other minds. But nevertheless positivism has left me with the general feeling that without the presence of an experiencing sentience to apprehend it in some way, “reality” is a meaningless and incoherent idea. This view is clearly related to Berkeley’s idealism. So, if reality is meaningless without a sentient apprehender, then the organised high complexity of the cosmos immediately follows: the experiencing sentience has to be sufficiently complex to possess the coherence needed to cognitively apprehend the cosmos. But since coherent human observers are composed of the very stuff of the cosmos, it follows that the cosmos must be sufficiently organised and complex to support the human sentience that apprehends it. When humans describe the cosmos they are in effect describing themselves. I advance a related idea in the introduction to my book Gravity and Quantum Non-Linearity; viz., that conscious sentience is described in its own terms, much like a computer language compiler written in the language it compiles.

The foregoing line of thought is essentially the strong anthropic principle. It attempts to show that sentient observers are logically necessary because a cosmos without them is regarded as an unintelligible notion. These prototype ideas on the aseity of sentience may throw light on the aseity of God.

Atheistic visions of the cosmos which are founded on the elementary elemental, such as bits and particles, will always face a logical hiatus: simplicity is simply too simple to self-explain. (I touch on this idea of elementary elementalism being unable to self-explain in the following posts:
http://quantumnonlinearity.blogspot.co.uk/2012/12/paul-nelson-computer-simulations-and.html)


Appendix II

Without the spongeam, conventional evolution is a non-starter.