
Tuesday, January 12, 2016

Melencolia I Part 7: Creating Information II

Part 7 of Melencolia I can be downloaded from here.  I reproduce the introduction to this paper below:

Introduction

The de facto Intelligent Design community lays claim to the notion that information is conserved, at least as far as so-called “natural processes” are concerned. In particular, one of the founding gurus of the movement, William Dembski, has stated the conservation of information in mathematical terms. The aim of this paper is to investigate this claim and its limitations.

What is information? There is more than one valid answer to that question, but this paper will be looking at the kind of information defined as the negative of the logarithm of a probability p; that is, -log p. This is the definition of self-information used by William Dembski. The obvious corollary here is that self-information increases with increasing improbability. The rationale behind this definition is that the lower the probability of an event, the more unexpected it is and therefore the more informative it becomes should it occur; information is data you don’t expect, don’t anticipate and don’t already know to be true; that is, you learn from it when it manifests itself to you and makes itself known.
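To make the definition concrete, here is a minimal Python sketch (my own illustration; the function name self_information is mine, not Dembski's):

```python
import math

def self_information(p, base=2):
    """Return the self-information -log(p) of an event with
    probability p; base 2 gives the answer in bits."""
    if not 0 < p <= 1:
        raise ValueError("p must lie in (0, 1]")
    return -math.log(p, base)

# The rarer the event, the more informative it is:
print(self_information(0.5))    # 1.0 bit
print(self_information(0.01))   # ~6.64 bits
```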

As an example, consider a configurational segment of length n taken from a series of coin tosses. Given n, it follows that the number of possible configurations of heads and tails is 2^n. If we assume that each of these possible configurations is equally probable then any single configuration will have a probability of 2^-n. For large n this is going to be a very small value. In this case our knowledge of which configuration the coin tossing will generate is at a minimum; all possibilities from amongst a huge class of possibilities are equally likely. Consequently, when the coin tossing takes place and we learn which configuration has appeared, it is highly informative because it is just one amongst 2^n equally likely possibilities. Just how informative that configuration is, is quantified by I, where:

I = -log 2^-n = n log 2        (0.0)
Conveniently this means that the length of a sequence of coin tosses is proportional to the amount of information it contains.
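Equation (0.0) is easy to check numerically. The following sketch (again my own illustration, not part of the paper) confirms that the self-information of a single coin-toss configuration grows linearly with the sequence length n:

```python
import math

# Each of the 2**n equally likely coin-toss configurations has
# probability 2**-n, so its self-information is -log2(2**-n) = n bits.
for n in (10, 100, 1000):
    p = 2.0 ** -n
    bits = -math.log2(p)
    print(f"n = {n:4d}  ->  I = {bits:.1f} bits")
```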

Information, as the very term implies, is bound up with observer knowledge: when calculating probabilities William Dembski uses the principle of indifference across mutually exclusive outcomes; that is, when there is no information available which leads us to think one outcome is more likely than another, we posit a priori that the probabilities of the possible outcomes are equal. I’m inclined to follow Dembski in this practice because I hold the view that probability is a measure of observer information about ratios of possibilities. In my paper on probability I defined probability recursively as follows:


Probability of case C = (Sum of the probabilities of cases favoring C) / (Sum of the probabilities of all cases)        (0.1)

This definition is deliberately circular or, if you want the technical term, recursive. Evaluating a recursively defined probability depends on the recursion terminating at some point. Termination will, however, come about if we can reach a point where the principle of indifference applies and all the cases are equally probable; when this is true the unknown probability cancels out on the right hand side of (0.1) and the probability on the left hand side can then be calculated. From this calculation it is clear that probability is a measure of human information about a system in terms of the ratio of possibilities open to it given the state of human knowledge of the system. This means probabilities are ultimately evaluated a priori inasmuch as they trace back to an evaluation of human knowledge about a system; the system in and of itself doesn’t possess those probabilities.
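As a toy sketch of how the recursion in (0.1) terminates (the tree-of-cases representation is my own illustrative device, not from the paper): each case is broken into sub-cases until we reach atomic outcomes to which the principle of indifference applies; those outcomes all share one common, unknown weight, and that weight cancels in the ratio.

```python
from fractions import Fraction

def count_atoms(case):
    """Recursively count the equally likely atomic outcomes under a case.
    A case is either an atomic outcome (a string) or a list of sub-cases."""
    if isinstance(case, str):   # recursion terminates here: indifference applies
        return 1
    return sum(count_atoms(sub) for sub in case)

def probability(favoring, all_cases):
    """Equation (0.1): the unknown common weight w of the atomic outcomes
    cancels in (count_favoring * w) / (count_all * w), leaving a count ratio."""
    return Fraction(sum(count_atoms(c) for c in favoring),
                    sum(count_atoms(c) for c in all_cases))

# A die roll: the case "even" decomposes into three atomic outcomes.
even, odd = ["2", "4", "6"], ["1", "3", "5"]
print(probability([even], [even, odd]))    # 1/2
print(probability([["2"]], [even, odd]))   # 1/6
```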

It follows then that once an improbable event has occurred and is known to have occurred, the self-information of that event is lost because it no longer has the power to inform the observer; the probability of a known event is unity and therefore carries zero information. But for an observer who has yet to learn of the event, whether the event has actually happened or not, the information is still “out there”. Information content, then, is observer relative.

Observer relativity means that self-information is not an intrinsic property of a physical system but rather an extrinsic property. That is, it is a property that comes about through the relation the system has with an observer, and that relation is to do with how much the observer knows about the system. Therefore a physical system loses its information as the observer learns about it, and yet at the same time there is no physical change in that system as it loses this information; where then has the information gone? Does it now reside in the observer’s head? But for another observer who is still learning about the system that information, apparently, remains “out there, in the system”.

Given the relativity of self-information, treating it as if it were an intrinsic property of a physical system can be misleading. The observer relativity of self-information makes it a very slippery concept: as an extrinsic property that relates observers to the observed, a system can at once possess information and yet not possess it!

This observer-information-based concept of probability is very relevant to the subject in hand: that is, to the cosmic generation of life. Given that the configurations of life fall in the complex ordered category, it follows that life as a class is very rare relative to the space of all possible configurations. So, assuming the principle of indifference over the total class of possible configurations, we would not expect living configurations to exist; this is because their a priori probability must be extremely small and therefore their self-information or surprisal value is very high: living configurations are very special and surprising configurations.

But the fact is life does exist and therefore the a posteriori probability of instantiated life is unity. It is this intuitive contradiction between the a priori probability and the a posteriori probability of life that constitutes one of the biggest of all scientific enigmas. In this paper I will attempt to disentangle some of the knots that arise when self-information is used to formulate a conservation law. I also hope to throw some light on the a priori/a posteriori enigma of life.
