Thursday, October 22, 2009

The Glasshouse Effect


Just a couple of comments on two blog posts I want to make a note of:

Uncommon Descent.
This post on Uncommon Descent made me laugh. I can understand UD’s feeling that it’s one law for them and another for established scientists! The papers the post refers to can be found here and here. What do I file these papers under? Hoax Physics, Crank Physics or just “Edgy”? Whatever; it appears that they have already been classified by their peer group as professional physics.

Sandwalk
This looks to be quite a promising departure on Sandwalk. It is a link to Jim Lippard’s blog. Lippard is an atheist, but he seems to be asking those reflexive epistemological questions that we should all be asking: What is science? How do we know things? Why do we believe? Etc. Given that the primary ontology of the cosmos is a highly speculative area, bound up with philosophy and epistemology, it is only natural that people attempting to arrive at preemptive proposals/views in this area will tend to come up with different speculations, ranging from theism, simulated universes, multiverses and self-organizing principles to postmodern nihilism. Where things go wrong, I feel, is when particular proposals become the jealously guarded property of some subcommunity which then, with religious zeal, embarks on the hard oversell of its views.* In particular, I cannot go along with Christians who take the position that atheists must all have bad consciences because they are willfully holding out against some Gnostic or fideist revelation. Atheism, in my view, is just one of a set of plausible constructions given the human predicament. Conversely, I must set against this those atheists who frequently accuse others of irrationality and yet themselves appear to lack self-criticism, reflexivity and self-scepticism.

But the good news is that Lippard appears to be asking the right kind of questions, questions that both hardened atheists and religionists need to be asking themselves. A step in the right direction, I feel.


Addendum: I notice that Larry Moran of Sandwalk says he hasn't the time to debate Lippard. It may be just me, but haven't I noticed before that he stays well clear when things start to get edgy? The "fringe" isn't for him, and I suppose we need people like him to keep things anchored. But there is more to scepticism than bread-and-butter science - and "bah humbug".

* Footnote
To be fair, this is probably an effect of communities polarizing against one another; if one community signals what it thinks of as its spiritual/intellectual superiority over another community, the latter community will, tit for tat, respond in kind.

Monday, October 19, 2009

The Neurological Problem

The following is a comment that I intended to put on James Knight’s blog here, but for some reason I was denied the paste option, so I’ll have to post it here instead:

I think this is what they call the “hard problem” – that is, just how does the subjective first person perspective of “qualia” marry with the (putatively) objective third person perspective? The latter perspective only recognizes first person mental processes as neuronal activity. In fact when the third person gets up close to the first person he only ever sees neuronal activity.

This issue, especially amongst materialists, is often cast in a mold which assumes the third person perspective is somehow more fundamental and “real” than the seemingly private world of first person qualia. This position is apt to overlook the fact that third person perspectives also necessarily involve a sentient agent, an agent that does the observing and reports in third person language; it’s a position that vaguely resembles the N-1 test for egocentricity, or those fly-on-the-wall documentaries where the necessary presence of the camera man is all too easily forgotten. In the final analysis the third person perspective entails an unacknowledged first person perspective. This failure to count the third person encourages the construction of a materialist ontology that uncritically bypasses the implicit assumption of sentience entailed by the observations needed to construct a third person narrative.

As both first and third person perspectives necessarily involve persons, the “hard problem” in actual fact seems to amount to this: Just how do the qualia of the first person register themselves in the qualia of the third person? That is, what does the third person observe (i.e. experience) when he attempts to investigate the first person's qualia? The answer to that is that the third person, when he takes a close look, only ever observes these qualia as neurological activity. How then do the first person qualia map to this activity?

Regardless of whether or not the third person perspective reveals a fundamental and primary materialist ontology behind qualia, it seems to me that the human perspective is forever trapped in the dichotomy of two stories: the “I-story” of first person qualia and the “It-story” of third person observations of other people; this latter story is told in terms of human biology.

I like the proposal that there is point by point conformity between these two stories; that is, for every first person experience of “qualia” there is a corresponding event in the third person world of “materials”. (However, I wouldn’t want to oversell this idea, or be too committed to it.)

With reference to the latter proposal it is worth recalling that, neurologically speaking, the mind probably has a chaotic dynamic, and thus is sensitive to the butterfly effect. Therefore even though we may eventually come to tell and understand the principles of the "I-story" in terms of neurological activity (as we might fancy we understand the principles of weather), it looks as though under any circumstance minds, like the weather, will forever generate effects of which we will have no clue from whence they came.

Friday, October 16, 2009

A Proposed Problem Solution

This post concerns a little computational conundrum that I have been pondering for some time. I now think I may have a handle on the solution.

Firstly some preamble.

Let’s assume that we can represent a general computation using a model similar to one I suggested in this post as follows:

Imagine a very long binary string – long enough to express any kind of computer output or result required. Now imagine all the possible configurations that this string could take, and arrange them into a massive network of nodes where each node is connected to neighboring nodes by an incremental change of one bit. Hence, if the binary string has a length of Lr (where r stands for ‘result’) then a given binary configuration will be connected to Lr other nodes, where each of these other nodes is different by just one bit.
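
To make the idea concrete, here is a minimal Python sketch (my illustration only, not part of the original model) that enumerates the Lr neighbours of a node in this bit-flip network:

```python
# Minimal sketch of the bit-flip network: each binary string of length Lr
# is a node, and its neighbours are the Lr strings differing by one bit.

def neighbours(s):
    """Return the strings reachable from s by flipping exactly one bit."""
    return [s[:i] + ('1' if s[i] == '0' else '0') + s[i+1:]
            for i in range(len(s))]

node = "0110"
print(neighbours(node))
# ['1110', '0010', '0100', '0111'] - four neighbours, since Lr = 4
```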

Given this network picture it is possible to imagine an ‘algorithm’ tracing paths through the network, where a path effectively represents a series of single bitwise changes. These paths can be described using another string, the ‘path string’, which has a length represented by Lp.

The essential idea here was that algorithmic change involves a path through a network of states separated by some kind of increment; in this case a bit flip. Since I wrote the above I have stayed with the general idea of a network of binary patterns linked by some form of increment, with algorithmic change effectively being a route through this network. However, in the above model the patterns are networked together by bit flipping – this seems a fairly natural way of linking the binary patterns, but as I have thought about it I have come to understand that the method of linking the patterns depends on the computing model used; in fact the computing model also seems to affect the set of patterns actually allowed, and even just how the “cells” that host the binary bits of the pattern are sequenced. Once these network variables are specified, it is only then that we can start considering just what the computing model thus defined can be programmed to do. In this network model of computation I envisage the running of the computation engine, its program and its workings in memory all to be uniquely represented by network patterns; thus part of a particular pattern will represent the computation engine’s state and its program.

So how does this pan out for, say, a Turing machine? The configurational state of the Turing machine’s tape in terms of allowed characters seems to have no limitation – all configurations of characters are allowed. But it seems that not every network transition is allowed. If we take into account the fact that we must represent the position of the machine on the tape as part of a network pattern, then it is clear that arbitrary pattern transitions can’t be made; a character cell on the output tape cannot be changed if the machine is not at that position. There will also be some limitations on the actual patterns allowed, for depending on how we are to code the machine’s programs, we will expect syntax limitations on these programs. So a Turing computing model puts some constraints on both the possible network configurations and the network transitions that can be carried out.
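
Here is a toy sketch of that constraint (my own illustration, with a made-up rule format rather than any standard notation): the network "pattern" is the whole triple of tape, head position and control state, and a single transition can only alter the cell under the head:

```python
# Sketch: one step of a toy Turing machine. A network transition may only
# change the tape cell under the head; all other cells are untouched.

def tm_step(tape, head, state, rules):
    """Apply one transition rule; return the new (tape, head, state)."""
    symbol = tape[head]
    write, move, new_state = rules[(state, symbol)]
    tape = tape[:head] + write + tape[head+1:]   # only this cell may change
    head += {'L': -1, 'R': 1}[move]
    return tape, head, new_state

# A toy rule set: in state 'A', write a '1' and move right.
rules = {('A', '0'): ('1', 'R', 'A'),
         ('A', '1'): ('1', 'R', 'A')}

print(tm_step("0010", 0, 'A', rules))   # ('1010', 1, 'A')
```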

Another point to make here is that what classifies as an incremental change in one computer model may involve a whole sequence of changes in another. Compare, for example, a Turing model with a “counter program” computing model. The Turing model is fairly close to my bit flipping model; changes in the tape involve simple character swaps. However, in a counter program the variables are numbers and the increments are plus or minus 1. This means that the character patterns of two numbers separated by the increment of 1 can look very different; e.g. 9999 + 1 = 10000. Thus, in counter programs two patterns separated by an increment of 1 or -1 may have more than a one character difference.
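
A quick illustration of the point (my own hypothetical helper, not anything from the literature): counting the character positions that change when a counter steps from 9999 to 10000:

```python
# Sketch: an increment of 1 in a counter model can change many characters
# at once, unlike the single-character swap of the bit-flip model.

def char_difference(a, b):
    """Count differing character positions (padding the shorter string)."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, '0'), b.rjust(width, '0')
    return sum(x != y for x, y in zip(a, b))

print(char_difference("9999", "10000"))   # 5: every digit position changes
```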

So what is my problem and how does the foregoing preamble help?

The problem I have pondered for some time (for a few years actually!) arises from the fact that there exists a class of small/simple programs that can in relatively quick time generate disordered patterns. (Wolfram’s cellular automata pictures are an example.) It follows therefore that there is a “fast time” map from members of this class of small programs to members of the class of disordered patterns.* But the class of disordered patterns is far, far greater than the class of short programs. Hence we conclude that only a tiny subset of disordered patterns map to “fast time” small programs. Now here’s the problem: Why should some disordered patterns be so favoured? The link that a relatively small class of disordered patterns has to simple mathematics grates against my feeling that across the class of disordered patterns there will be symmetry, and that no disordered pattern should be specially favoured.
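
By way of example, here is a rough sketch of Wolfram's Rule 30 - one of those small programs that produces a disordered-looking pattern in fast time. The fixed-zero boundary handling is a simplification of my own:

```python
# Sketch of Wolfram's Rule 30: new cell = left XOR (centre OR right).
# A tiny program, yet its output row soon looks thoroughly disordered.

def rule30(cells, steps):
    for _ in range(steps):
        padded = [0] + cells + [0]          # hold the boundaries at 0
        cells = [padded[i - 1] ^ (padded[i] | padded[i + 1])
                 for i in range(1, len(padded) - 1)]
    return cells

width, steps = 31, 15
cells = [0] * width
cells[width // 2] = 1                        # a single seed cell
print(''.join(map(str, rule30(cells, steps))))   # disordered-looking output
```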

My current proposed solution to this intuitive problem is as follows. The computing model is a variable, and hence it is very likely that different computing models will favour different sets of disordered patterns with a map to fast time generation algorithms. In fact my guess is that the computing model variable has such a large degree of freedom that it is possible to find a model that will generate any given configuration in fast time. The computing model variable thus restores symmetry over the class of disordered configurations. But in order to cover the whole class of disordered configurations with fast time maps, where does the enormous variability of the computing model come from? It seems to come from several sources. One, as we have seen above, is the way the computing model wires the network of increments. Another degree of freedom is found in the sequencing of the patterns in the network: a Turing machine, via its quasi continuous movements up and down a tape of cells, effectively imposes a particular order on the cells of the output. It is possible to imagine another Turing machine that has an entirely different cell ordering and thus would access an entirely different set of disordered configurations in fast time; see the sketch below. I suggest, then, that the possibilities of the computing model look to be enough to cover the whole class of disordered configurations with fast time maps, thus restoring the symmetry I expect.
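
A toy illustration of the cell-ordering point (my own sketch; the permutation stands in for a re-ordered tape): if one model generates a pattern quickly, a model identical except for a permuted cell ordering generates the permuted pattern just as quickly:

```python
# Sketch: re-ordering output cells turns one fast-generated pattern into
# another. A machine that writes `row` in fast time has a counterpart,
# differing only in cell ordering, that writes the permuted row as fast.
import random

row = "0110100110010110"                      # some fast-generated pattern
perm = list(range(len(row)))
random.shuffle(perm)                          # an arbitrary cell re-ordering
permuted_row = ''.join(row[i] for i in perm)
print(permuted_row)    # a different pattern, reachable in the same time
```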

* Footnote
The “Fast Time” stipulation is important. A single simple program can, if given enough time, eventually generate any selected pattern, and thus on this unrestricted basis a simple program maps to all output configurations.

Thursday, October 15, 2009

Does Alpha's Poll Exist?


Characters of the Wild Web: Myers’ raiders ransacking the Alpha Course web site.

I notice that PZ Myers' raiders are out and about again causing gratuitous grief. This time they have completely wrecked an Alpha course poll! (see http://scienceblogs.com/pharyngula/2009/10/alpha_pollalready_demolished.php) Here's the Alpha poll page last time I looked:

Does Alpha's poll exist? No, thanks to Myers' invasion.

I think it's going to be pretty "Nicky Grumble" at Alpha HQ when the good Rev sees this. PZ had better watch out; Rev Gumbel has quite a few troopers behind him as well - just how many supporters have been through Alpha courses? (I'm not one of them, I must add.)

Tuesday, October 13, 2009

The End of Puzzlement?

The juggling act in which William Dembski has to engage in order to maintain popular appeal amongst a broad church ID/YEC movement may be evidenced in his UD post advertising his latest book “The End of Christianity”. I’ve always been rather puzzled by his position and the following quote taken from the UD post only compounds my puzzles: “Even though argument in this book is compatible with both intelligent design and theistic evolution, it helps bring clarity to the controversy over design and evolution.”

Dembski gives every impression of being a very nice guy, but over in America this evolution/ID debate sometimes resembles a kind of football culture with star players getting money and accolades, and supporters fanatically sold out to their respective sides. I’m unsure about whether or not Dembski’s position is compromised by having an adoring “fan club” and, who knows, wealthy benefactors as well. Is fan club driven science a good atmosphere for a dispassionate perspective on the cosmos, a perspective that is far healthier when one is prepared to face one’s demons rather than one's fans? Will I buy the book? I’ll have to think about it.

Mr. Deism Speaks Out.

In this little piece (which includes a YouTube video) by PZ Myers, two points are worth noting:

a) The reference to evolution as a cruel and inefficient process

b) That evolution is a process which, once started, means that the deity doesn’t need to expend further effort or even lift a finger to assist.

The irony is that many an ID/YEC pundit would agree on both points! That’s why atheists love “natural” evolution and ID/YECs hate it.

Wednesday, October 07, 2009

Darwin Bicentenary Part 27: The Mystery of Life’s Origin (Chapter 7)



I have been busy looking at the three online chapters of “The Mystery of Life’s Origin”, a book written by anti-evolutionists Thaxton, Bradley and Olson. The first of these chapters (chapter 7) introduces itself thus:

It is widely held that in the physical sciences the laws of thermodynamics have had a unifying effect similar to that of the theory of evolution in the biological sciences. What is intriguing is that the predictions of one seem to contradict the predictions of the other. The second law of thermodynamics suggests a progression from order to disorder, from complexity to simplicity, in the physical universe. Yet biological evolution involves a hierarchical progression to increasingly complex forms of living systems, seemingly in contradiction to the second law of thermodynamics. Whether this discrepancy between the two theories is only apparent or real is the question to be considered in the next three chapters.

In another passage we read:

It is often noted that the second law indicates that nature tends to go from order to disorder, from complexity to simplicity. If the most random arrangement of energy is a uniform distribution, then the present arrangement of the energy in the universe is nonrandom, since some matter is very rich in chemical energy, some in thermal energy, etc., and other matter is very poor in these kinds of energy. In a similar way, the arrangements of mass in the universe tend to go from order to disorder due to the random motion on an atomic scale produced by thermal energy. The diffusional processes in the solid, liquid, or gaseous states are examples of increasing entropy due to random atomic movements. Thus, increasing entropy in a system corresponds to increasingly random arrangements of mass and/or energy.

Thus far Thaxton, Bradley and Olson hint at a possible conflict between evolution and the second law of thermodynamics. At this stage, however, TB&O don’t claim an outright contradiction but rather an intuitive contradiction that needs to be investigated. Their caution is justified because there is a basic incoherence in TB&O’s statement of the apparent problem. They suggest a parallel between order and complexity, but the fact is that the most highly ordered systems, like, say, the periodicity one finds in a crystal, are the very antithesis of complexity. They also suggest a parallel between disorder and simplicity whereas, in fact, highly disordered systems, such as random sequences, are in one sense extremely complex rather than simple; their complexity is such that in the overwhelming number of cases they don’t submit to description via some succinct and relatively simple mathematical scheme. There is, therefore, an inconsonance in TB&O linking order to complexity and disorder to simplicity.

What may be confusing TB&O is the fact that highly disordered systems are statistically simple but not configurationally simple: statistically speaking, a very disordered system may be characterized with a few macroscopic parameters whose values derive from means and frequencies, whereas to pin a disordered system down to an exact configuration requires, in most cases, a lot of complex data. Where TB&O seem to fall down in chapter 7 is that they fail to make it clear that living structures are neither very simple nor highly complex, neither highly ordered nor highly disordered, but are, in fact, configurations that lie somewhere in between the extremes of order and disorder, simplicity and complexity. As such, the characterization of living things is amenable neither to simple mathematical description nor to statistical description.
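
One crude way to visualize this spectrum (my own sketch; compressed length is only a rough stand-in for algorithmic complexity) is to compare how well ordered, intermediate and disordered strings compress:

```python
# Sketch: compressed length as a rough proxy for where a string sits on
# the order-disorder spectrum. Highly ordered strings compress well;
# random strings barely compress; intermediate structures sit in between.
import random
import zlib

def compressed_size(s):
    return len(zlib.compress(s.encode()))

ordered    = "ab" * 500                                       # crystal-like periodicity
mixed      = "ab" * 250 + ''.join(random.choice("ab") for _ in range(500))
disordered = ''.join(random.choice("ab") for _ in range(1000))

for name, s in [("ordered", ordered), ("mixed", mixed), ("disordered", disordered)]:
    print(name, compressed_size(s))
# Typically: ordered < mixed < disordered
```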

That TB&O have misrepresented the order/disorder spectrum as a polarity between complexity and simplicity helps ease through their suggestion of an intuitive contradiction between evolution and the second law. Because they have identified high order with high complexity, and because the second law implies a move away from order, it follows in the minds of TB&O that the second law must also entail a move away from structural complexity, thus ruling out the development of complex living structures as a product of thermodynamics. If TB&O were aware of the intermediate status of living structures in the configurational spectrum they might realize that the situation is more subtle than their mismanagement of the categories suggests.

Another nuance that doesn’t come out in chapter 7 of TB&O’s book is that disorder or entropy, as it is defined in physics, is a parameter that is not a good measure of the blend of complexity and organization we find in organic structures. In physics disorder is measured by the number of possible microscopic configurations consistent with a macroscopic condition. For example, compare two macroscopic objects such as a crystal and an organism. A crystal on the atomic level is a highly organized structure, and as such it follows that there are relatively few ways the particles of a crystal can be rearranged and still give us a crystal. On the other hand a living structure has a far greater scope for structural variety than that of a crystal, and it therefore follows that the number of ways of rearranging an organism’s particles without disrupting the abstract structure of the organism is much greater than that of a crystal. Therefore, using the concept of disorder as it is defined in physics, we conclude that an organism has a far greater disorder than a crystal. Given the middling disorder of organisms as measured by physics, one might then be led to conclude that in the slow run down of the universe from high order to low order the universe will naturally run through the intermediate disorder of organic forms. Clearly there is something wrong with TB&O’s presentation of the thermodynamic case against evolution. (To be fair, I ought to acknowledge that ID theorist Trevors, in this paper, does demonstrate an appreciation of the intermediate place occupied by the structures of life.)
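
For reference, the textbook definition being appealed to here is Boltzmann's; the ordering of the multiplicities below is my own summary of the paragraph's argument, not TB&O's notation:

```latex
% Boltzmann's entropy: S grows with the number of microstates W
% consistent with a given macrostate.
S = k \ln W, \qquad
W_{\mathrm{crystal}} \ll W_{\mathrm{organism}} \ll W_{\mathrm{gas}}
```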

Consider also this quote from TB&O:

The second law of thermodynamics says that the entropy of the universe (or any isolated system therein) is increasing; i.e., the energy of the universe is becoming more uniformly distributed.

Precisely; when compared to the high concentration of energy in a star, the energy distribution in living structures is part of a more uniform distribution of the sun's highly concentrated energy, and therefore as far as physics is concerned it represents a more degraded form of energy. Admittedly, the thought that physics’ rather crude measure of disorder implies that living structures are an intermediate macrostate in the thermodynamic run down of the high concentration of energy found in a star is counterintuitive, but TB&O have so far failed to give a coherent statement of the thermodynamic problem in order to explain this seeming anomaly.

* * *
The second law of thermodynamics works because cosmic systems are walking randomly across the possible states open to them. Given this assumed random walk, the overall trend will be for those systems to move toward categories of state with greater statistical weight. It follows, therefore, that classes of states which have a large statistical weight are, as time progresses, the most likely to be occupied by the system; for example, heat is likely to move from a hotter region to a colder region because a uniform distribution of heat can be realized with a far greater number of microscopic arrangements than a heterogeneous distribution. A crucial feature of this physical model is the space of possible states. This is determined by the laws of physics, which limit the number of possible states available. Through sharply limiting the number of available states, the laws of physics effectively impose an order on the cosmos. Because these laws transcend thermodynamics, the order they impose on the cosmos is not subject to thermodynamic run down.
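
A minimal simulation of this idea (my own sketch, the classic two-box "urn" picture): particles hop randomly between two boxes, and the system drifts toward the even split simply because that macrostate has the greatest statistical weight:

```python
# Sketch: a random walk across microstates drifts toward macrostates of
# larger statistical weight. N particles hop at random between two boxes;
# the near-uniform split (the highest-weight macrostate) soon dominates.
import random

N, steps = 100, 5000
in_left = N                       # start with all particles in the left box
for _ in range(steps):
    if random.randrange(N) < in_left:
        in_left -= 1              # a randomly chosen particle hops right
    else:
        in_left += 1              # ... or hops left
print(in_left)                    # typically close to N/2 = 50
```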


As I have indicated in this post, it is conceivable that the laws of physics are so restrictive, in fact, that they eliminate enough possible states to considerably enhance the relative statistical weight of living structures at certain junctures in the evolution of the cosmos. In effect these laws impose a state “bottleneck”, whereby in the run down to disorder the cosmic system is forced to diffuse through this bottleneck - a place where living macrostates effectively have a realistic probability of existence because their relative statistical weight is enhanced by the laws of physics. Having said that, I would certainly accept that ID theorists should challenge the existence of this state bottleneck; it is by no means obvious that physics implies such a bottleneck. But what I do not accept is the anti-evolutionist’s canard that the second law of thermodynamics in and of itself is sufficient to rule out a realistic probability of evolution. If the anti-evolution lobby wants to clinch their case they need to show that physics does not apply sufficient mathematical constraint on the space of possibilities to enhance the relative statistical weight of organic structures at certain stages in the run down to maximum disorder.
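
Here is a toy calculation of the bottleneck idea; the numbers are purely invented for illustration and carry no physical significance:

```python
# Sketch: a toy "bottleneck" calculation. If physical law disallows most
# states, the relative statistical weight - and hence probability - of a
# small target class (standing in for "living" configurations) rises.

target_weight      = 1e6    # hypothetical count of target states
total_states       = 1e30   # states available without constraint
constrained_states = 1e12   # states surviving the lawful "bottleneck"

print(target_weight / total_states)        # 1e-24: effectively impossible
print(target_weight / constrained_states)  # 1e-06: a realistic probability
```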

I agree with TB&O when they say:

There is another way to view entropy. The entropy of a system is a measure of the probability of a given arrangement of mass and energy within it. A statistical thermodynamic approach can be used to further quantify the system entropy. High entropy corresponds to high probability. As a random arrangement is highly probable, it would also be characterized by a large entropy. On the other hand, a highly ordered arrangement, being less probable, would represent a lower entropy configuration. The second law would tell us then that events which increase the entropy of the system require a change from more order to less order, or from less-random states to more-random states.

But the problem here is what TB&O leave unsaid. There is no quibble with the assertion that the second law of thermodynamics ensures a migration of an isolated system to its most probable class of state, as determined by statistical weightings. But, and this is what TB&O don’t acknowledge, the details of that migration are sensitive to the constraints of physics, and those constraints apply a transcendent order that is not subject to thermodynamic decay. These constraints may (or may not) considerably enhance, via a state bottleneck, the probability of the formation of living structures as an intermediate state of disorder in the run down to the most probable state.

That TB&O fail to understand the essential issue is indicated from the following:

Clearly the emergence of order of any kind in an isolated system is not possible. The second law of thermodynamics says that an isolated system always moves in the direction of maximum entropy and, therefore, disorder.

The intent of this statement, presumably, is to dismiss the possibility of the evolution of life on the basis of the second law of thermodynamics. In TB&O’s minds their intuitive conclusion is safe because they have wrongly pushed living structures to the extreme ordered end of the order-disorder spectrum. They see no prospect of life arising because thermodynamic change is always away from order, and in TB&O’s flawed opinion it must therefore always be away from the states of life.

Despite TB&O’s initial caution, it is clear that they think their conclusion is safe before they have demonstrated it:

Roger Caillois has recently drawn this conclusion in saying, "Clausius and Darwin cannot both be right."3 This prediction of classical thermodynamics has, however, merely set the stage for refined efforts to understand life's origin.

But to be fair to TB&O, they don’t entirely dismiss evolution and they are prepared to consider those refined efforts to understand life’s origin. They engage in some uncontentious thermodynamic analysis showing that the subsystems of a system that is far from equilibrium may decrease in entropy. Hence the only hope that evolution has, they concede, is in the area of systems far from equilibrium. In this TB&O are right. Evolution, if it is to be found at all, will only be found in non-equilibrium systems; that is, systems that diffuse through a conjectured state bottleneck where the squeeze on the available states ensures that the relative statistical weight of living structures is enhanced, thus imbuing them with enhanced probability. But TB&O warn:

Nevertheless, one cannot simply dismiss the problem of the origin of organization and complexity in biological systems by a vague appeal to open-system non-equilibrium thermodynamics. The mechanisms responsible for the emergence and maintenance of coherent (organized) states must be defined.

On that point I certainly agree with TB&O: the appeal to non-equilibrium thermodynamics is vague and general, and the mechanisms responsible for the emergence and maintenance of coherent (organized) states must be defined. In other words, is there any real evidence for a state bottleneck, a bottleneck that must be the general mechanism behind evolution? In my next installment I hope to see what TB&O’s chapter 8 has waiting for us on this important question.