In this series of posts I have been critiquing de facto IDists Nametti and Holloway's (N&H) concept of "Algorithmic Specified Complexity" (ASC), a quantity which they claim is conserved. The other parts of this series can be found here, here, here and here.
As we have seen, ASC does not provide robust evidence of the presence of intelligent activity; its conservation is also questionable. The underlying motive for N&H's work probably flows out of their notion that "Intelligence" is fundamentally mysterious and cannot be rendered in algorithmic terms, no matter how complex those terms are. Also, the de facto ID community posits a sharp dichotomy between intelligent activity and what they dismiss as "natural forces". They believe that the conservation of what they identify as "Information" can only be violated by the action of intelligent agents, agents capable of creating otherwise conserved "information" from nothing.
For me, as a Christian, de facto ID's outlook is contrary to many of my own views, which readily flow out of my understanding that those so-called "natural forces" are the result of the creative action of an immanent Deity and will therefore reveal something of God's nature in their power to generate difference and pattern; after all, human beings are themselves natural objects with the ability to create pattern and configuration via artistic and mathematical endeavour.
Although I would likely see eye-to-eye with de facto IDists that there is no such thing as something coming from nothing (a notion proposed by some atheists), for me the very glory of creation is that it generates otherwise unknown & unforeseen patterns and configurations; as Sir John Polkinghorne puts it, our cosmos is a "fruitful creation". Whilst for the Divine mind there may be nothing new under the sun, for those under the sun the unfolding of creation is a revelation. So whilst it's true that the realisation that something has to come from something prompts us to probe for a formal expression of the conservation of something, on the other hand the fact that creation creates pattern, and that humans learn from the revelation of this creation, suggests we should also probe for a formal expression of our intuition that information is created. De facto IDists behave like crypto-gnostics, unable to acknowledge the sacredness of creation (albeit corrupted by Satan and Sin).
All in all I find the IDists' obsession with trying to prove an all-embracing theorem of information conservation as misguided and futile as the atheist project to show how something can come from nothing.
***
1. Evolution and teleology
In this part 5 I want to look at atheist Joe Felsenstein's reaction to N&H's efforts. In fact in this post Joe Felsenstein criticizes the concept of ASC on the basis that it simply doesn't connect with the essential idea of evolution: that is, the selection of organic configurations based on fitness:
FELSENSTEIN: In natural selection, a population of individuals of different genotypes survives and reproduces, and those individuals with higher fitness are proportionately more likely to survive and reproduce. It is not a matter of applying some arbitrary function to a single genotype, but of using the fitnesses of more than one genotype to choose among the results of changes of genotype. Thus modeling biological evolution by functions applied to individual genotypes is a totally inadequate way of describing evolution. And it is fitness, not simplicity of description or complexity of description, that is critical. Natural selection cannot work in a population that always contains only one individual. To model the effect of natural selection, one must have genetic variation in a population of more than one individual.
Yes, the resultant configurations of evolution are about population fitness (or at least its softer variant of viability). A configuration's Shannon improbability minus its relative algorithmic complexity (the quantity ASC measures) is an insufficient condition for this all-important product of evolution.
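For reference, my understanding of the quantity at issue in this series can be written as follows (this is the usual form in the ASC literature as far as I can tell, so treat the notation as a gloss rather than N&H's exact formulation):

```latex
\mathrm{ASC}(x, C, P) \;=\; -\log_2 P(x) \;-\; K(x \mid C)
```

...where P(x) is the probability of the configuration x under the chance hypothesis (the Shannon improbability term) and K(x|C) is the length of the shortest program that generates x given the context C (the relative algorithmic complexity term).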
Fitness (and its softer variant of viability) is a concept which, humanly speaking, is readily conceived and articulated in teleological terms as the "goal", or at least as the end product, of evolution; that is, we often hear about evolution "seeking" efficient survival solutions. But of course atheists, for whom teleological explanations are assumed to be alien to the natural world, will likely claim that this teleological-sounding talk is really only a conceptual convenience and not a natural teleology. For them this teleology is no more significant than cause-and-effect Newtonianism being thought of in terms of those mathematically equivalent "final cause" action principles. As is known among theoretical physicists, there is no logical need to think of Newtonian mechanics in terms of final causation: algorithmically speaking, "final cause" action principles are just a nice way of thinking about what are in fact algorithmically procedural Newtonian processes, processes driven from behind. Pre-causation rather than post-causation rules here.
However, although Felsenstein the atheist will likely acknowledge there is no actual teleology in evolution, he nonetheless accepts that thinking about evolution in terms of its end result of fitness makes the whole process (pseudo) meaningful. Felsenstein points out that the earlier ID concept of Complex Specified Information (CSI) is intrinsically more meaningful than ASC and notes that CSI was explicitly stated by William Dembski in terms of end results. Viz:
In the case where we do not use ASC, but use Complex Specified Information, the Specified Information (SI) quantity is intrinsically meaningful. Whether or not it is conserved, at least the relevance of the quantity is easy to establish. In William Dembski's original argument that the presence of CSI indicates Design (2002) the specification is defined on a scale that is basically fitness. Dembski (2002, p. 148) notes that:
"The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of the minimal function of biochemical systems. Darwinist Richard Dawkins cashes out biological specification in terms of the reproduction of genes. Thus in The Blind Watchmaker Dawkins writes "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is ... the ability to propagate genes in reproduction."
The scale on which SI is defined is basically either a scale of fitnesses of genotypes, or a closely-related one that is a component of fitness such as viability. It is a quantity that may or may not be difficult to increase, but there is little doubt that increasing it is desirable, and that genotypes that have higher values on those specification scales will make a larger contribution to the gene pool of the next generation.
Herein we find the irony: both Felsenstein and Dembski note that Complex Specified Information is only meaningful in terms of a conceived end result, e.g. fitness, viability, minimal function, gene propagation or whatever. But of course what appears to be teleology here would simply be regarded by a true-blue atheist as an elegant intellectual trick of no more teleological significance than the action principles of physics.
2. Evolution and the spongeam
It is relatively easy to determine whether a given organism has viability; that is, whether it is capable of self-maintenance and self-replication: just watch it work! But the reverse is much more difficult: from the general requirements of self-maintenance and self-replication it is far from easy to arrive at detailed structures that fulfill these requirements. This is where evolution is supposed to "solve" the computational problem: it is a process of "seek and find"; a "find" registers as successful if a generated configuration is capable of self-maintenance and self-replication given its environment. In evolution the self-maintaining and self-replicating configurations are, of course, self-selecting. That's the theory anyway. Strictly speaking, of course, talk about evolution "solving" a computational problem is not going to go down well with the anti-teleologists, because the activity of "solving" connotes an anthropomorphic activity where a problem has been framed in advance and a "solution" sought for; computation in most cases is a goal-motivated process where the goals of problem solving are its raison d'être. But to a fully fledged atheist evolution has no goals - evolution just happens because of the cosmos's inbuilt imperative logic, a logic implicit from the start, a logic where evolutionary outcomes are just incidental to that logic. If we believe evolution to be driven by causation from behind, then evolutionary outcomes will be implicit in the initial conditions (at least probabilistically). It has to be assumed that these outcomes will at least have a realistic probability of coming about given the size & age of the universe and the causation laws that constrain the universe. To this end I have in various papers and posts caricatured evolutionary processes as the exponential diffusion of a population of structures across configuration space, where each reproductive step is constrained by the conditions of self-maintenance and self-replication. But this process will only work if a structure I call the spongeam pervades configuration space. I'm not here going to air my doubts about the presence of this structure but simply note that conventional evolution requires the spongeam to be implicit in the laws of physics. The spongeam is in effect the repository of the up-front information which is a prerequisite of evolution and also OOL. More about the spongeam can be found in these references...
Although I think reasonable atheists would accept that evolution requires a burden of up-front information, there may still be resistance to this idea because it then raises the question of "Where from?". Like IDists who are determined to peddle the notion of information conservation, some atheists are constantly drawn toward the "something from nothing" paradigm. See for example the discussion I published in the comments section of the post here, where an atheist just couldn't accept that the natural physical regime must be so constrained that evolution effectively has direction. He may have been of the "naked chance" persuasion.
3. Generating complexity
As we have seen in this series, random configurations are identified by the length of the algorithm needed to define them: viz, if the defining algorithm needs to be of a length equal to the length of the configuration then that configuration is identified as random. However, that a random configuration can only be defined by an algorithm of the same length doesn't mean that the random configuration cannot be generated by algorithms shorter than the length of the random configuration: after all, a simple algorithm that systematically generates configurations, like, say, a counting algorithm, will, if given enough time, generate any finite random sequence. But as I show in my book on Disorder and Randomness, small-space parallel processing algorithms will only generate randomness by consuming huge amounts of time. So basically generating randomness from simple beginnings takes a very long time (actually, by definition!). In this sense randomness has a high computational complexity.
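To illustrate the point in a hedged, toy form: the sketch below enumerates bit-strings with a very short "counting" program and reports how many candidates it examines before a given target turns up (the target string is an arbitrary choice of mine). The program is tiny, but the step count grows exponentially with the target's length.

```python
# Toy sketch only: a tiny "counting" program that systematically enumerates
# bit-strings and counts how many candidates it examines before a given
# target appears.

from itertools import product

def steps_to_generate(target, alphabet="01"):
    """Enumerate strings in length order, then lexicographic order, until target appears."""
    steps = 0
    length = 1
    while True:
        for candidate in product(alphabet, repeat=length):
            steps += 1
            if "".join(candidate) == target:
                return steps
        length += 1

# A 16-bit "random-looking" target: the generating program is short, but on
# the order of 10^5 enumeration steps are needed; each extra bit roughly
# doubles the cost.
print(steps_to_generate("1101001110101101"))
```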
Although complex organised entities like biological structures obviously do not classify as random configurations, they do have properties in common with randomness in that they show a great variety of sub-configurational patterns and are potentially a huge class of possible configurations (but obviously a lot smaller than the class of random configurations). Therefore it is very likely that such complex configurations as living structures, being somewhere between high order and high disorder in complexity, are themselves going to take a long time to generate from algorithmic simplicity; a much longer time, I'll wager, than the age of the universe, even given the constraining effect of the laws of physics. This would mean that to generate life in a cause and effect cosmos and in a timely way, sufficient up-front information (such as the spongeam) must be built into the cause and effect algorithms from the outset. The cause and effect paradigm, even if used probabilistically, requires that the outcomes of a process are implicit (if only probabilistically) in the current state of the procedure. But if there is insufficient up-front information built into a cause and effect system to drive the generation of life in a short enough time, how could it be done otherwise? If we imagine that there was no spongeam, could life still be arrived at? I believe there are other possibilities and ideas to explore here.
4. Declarative languages and computation
In conventional evolution (and OOL) the potential for life is thought to be built into a physical regime, a regime driven from behind in a cause and effect way. But if those cause and effect laws are simple parallel imperative algorithms and provide insufficient constraint (i.e. insufficient up-front information), life can then only be developed by consuming considerable time and space resources; more time and space, in fact, than the known cosmos provides. So, as I have proposed in my Thinknet and Melencolia I projects, one way of solving the generation time problem is to use expanding parallelism. But for this technique to work another important ingredient is needed: the declarative ingredient, which means that what is generated is subject to selection; in a declarative context, therefore, the algorithms are explicitly teleological and driven by a goal-seeking intentionality.
Most procedural programs are actually implicitly teleological in that they are written in an imperative language with the aim of producing some useful output; that is, an end product. But in a true declarative program the procedures aren't written down but rather the declarative language itself is used to state the desired end product in a logical and mathematical way and the compiler translates this formal statement into procedures. A practical example of a simple declarative language would be as follows:
5. Example of a declarative language
Our problem might be this: is there a number that has the following set of properties: its digits add up to a specified number N1, it is exactly divisible by another specified number N2, and it is also a palindrome? This is a declaration of intention to find whether such a number or numbers exist. One way to conceive the solution of this problem is to imagine that all the natural numbers actually exist, at least platonically, as locations in some huge imaginary "memory" that can be flagged by signalling processes; in this sense the availability of memory space is assumed not to be an issue. In fact we imagine we have three processes at work; viz: process A, which systematically flags numbers whose digits add up to N1; process B, which systematically flags numbers which are multiples of N2; and finally process C, which systematically flags palindromes. If all three processes flag the same number then we have a solution to the stated problem (there may be more than one solution of course). Hence a solution, or a range of solutions, can then be selected as the goal of the cluster of processes.
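As a minimal, finite sketch of this (the values of N1 and N2 and the search bound are hypothetical choices of mine; the idealised version assumes unlimited "memory" and an unbounded supply of candidates):

```python
# A minimal, finite sketch of the three flagging processes described above.
# N1, N2 and the search bound are hypothetical choices.

N1, N2, BOUND = 18, 11, 100_000

def process_A(n):                     # flags numbers whose digits add up to N1
    return sum(int(d) for d in str(n)) == N1

def process_B(n):                     # flags exact multiples of N2
    return n % N2 == 0

def process_C(n):                     # flags palindromes
    return str(n) == str(n)[::-1]

# A number flagged by all three processes is a solution to the stated problem.
solutions = [n for n in range(1, BOUND)
             if process_A(n) and process_B(n) and process_C(n)]
print(solutions[:10])                 # e.g. 99 is one such number
```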
A method of this sort could be employed for other mathematical properties of numbers. Hence we could extend the toolbox of flagging processes indefinitely: A, B, C, D, E... etc. Each of these labels represents a process which generates a particular kind of number. So we could then make declarations of intent, Thinknet style, such as:
[A B C D]
1.0
This expression represents a simple declarative program for finding numbers with all the properties flagged by A, B, C and D. This search is a two-stage operation: firstly, the configuration A B C D represents the operation of forming halos of numbers flagged with their respective properties. The square brackets [ ] represent the operation of making a selection of those numbers which simultaneously have all the properties that the processes A, B, C and D flag.
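A minimal sketch of this two-stage reading, assuming a finite candidate pool (the idealised version assumes unlimited "memory") and four hypothetical stand-in predicates for A, B, C and D:

```python
# A minimal sketch of the two-stage reading of 1.0 over a finite candidate pool.

POOL = range(1, 100_000)

A = lambda n: n % 7 == 0                         # multiples of 7
B = lambda n: str(n) == str(n)[::-1]             # palindromes
C = lambda n: sum(int(d) for d in str(n)) == 14  # digit sum of 14
D = lambda n: n % 2 == 0                         # even numbers

def halo(flag):
    """Stage one: the halo of pool members flagged by a single process."""
    return {n for n in POOL if flag(n)}

def select(*halos):
    """Stage two, the [ ] operation: numbers sitting in every halo at once."""
    return set.intersection(*halos)

print(sorted(select(halo(A), halo(B), halo(C), halo(D)))[:5])
```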
In analogy to Thinknet we could further sophisticate this simple language by allowing nesting; that is:
[[A B] C D]
2.0
...where the nest [A B] is solved first by selecting a set of solutions which have properties flagged by both A and B. Processes C and D then run, flagging their respective properties. The outer brackets are eventually applied to make a selection of numbers which simultaneously have all the properties flagged by A, B, C and D. It is likely that different bracketing scenarios will come up with different results. Hence in general:
[[A B] C D] != [A B [C D]]
3.0
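A caveat and a toy sketch: with unlimited memory and pure intersection the bracketing would make no difference at all (intersection doesn't care about grouping); the difference only arises if, as in Thinknet-style selection, each [ ] passes on a bounded halo of winners rather than every qualifying number. The sketch below adopts that assumption (the pool, the cap and the four predicates are hypothetical choices of mine) and shows the two bracketings of 3.0 picking out different numbers.

```python
# Toy sketch under the assumption that each [ ] selection passes on only a
# bounded halo of "winners" (here, the HALO_LIMIT smallest qualifiers). With
# pure, unbounded intersection the bracketing order would not matter; with
# bounded selection it genuinely can.

POOL = set(range(1, 10_000))
HALO_LIMIT = 20

A = lambda n: n % 3 == 0
B = lambda n: n % 5 == 0
C = lambda n: n % 7 == 0
D = lambda n: str(n) == str(n)[::-1]       # palindrome

def select(candidates, *flags):
    """The [ ] operation: keep candidates satisfying every flag, capped to a halo."""
    hits = sorted(n for n in candidates if all(f(n) for f in flags))
    return set(hits[:HALO_LIMIT])

left  = select(select(POOL, A, B), C, D)   # [[A B] C D]
right = select(select(POOL, C, D), A, B)   # [A B [C D]]

print(left == right)                       # False with these choices
print(sorted(left), sorted(right))
```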
No doubt there is a lot of scope for sophisticating this simple declarative language further - for example we could have processes which seek numbers which do not have certain declared properties; these are what you might call negation processes: Hence we might have:
[A !B]
4.0
...where !B means numbers that don't have the property flagged by the process B. Other logical operators could no doubt be added to this language.
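Extending the earlier sketches, the negation process is little more than a one-liner (again the predicates are hypothetical stand-ins):

```python
# A minimal sketch of the negation operator: !B simply flags the complement of
# B within whatever finite pool we are working over.

POOL = range(1, 1_000)

A = lambda n: sum(int(d) for d in str(n)) == 9    # digits add up to 9
B = lambda n: n % 5 == 0                          # multiples of 5
not_B = lambda n: not B(n)                        # the "!B" process

# [A !B]: numbers whose digits sum to 9 but which are not multiples of 5.
print([n for n in POOL if A(n) and not_B(n)][:10])
```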
Clearly the details of the processes A, B, C etc. are procedural; that is, they must be programmed in advance using a procedural cause & effect language. Nevertheless, the foregoing provides a simple example of a declarative language similar to Thinknet where, once the procedural work has been done setting up A, B, C... etc., the language can then be used to construct declarative programs. However, simple though this language is, practically implementing it is beyond current computing technology: firstly, it is assumed that working memory space isn't limited; secondly, the processes A, B, C... and [ ] would be required to operate with huge halos of possible candidate numbers; this latter requirement really demands that the seek, flag and selection processes have available to them an expanding parallelism rather than the limited parallel computation of our current computing technology.
In and of themselves the flagging procedures designated A, B, C etc. do not appear to obviously manifest any goal-driven behaviour: it is only when A, B, C etc. are brought together with [ ] that the teleology of the system emerges. But having said that, we must realise that if it were possible to implement the above declarative language on a computer it would in fact be only a simulation of declarative mechanics. For in a practical computer not only would the procedural algorithms defining A, B and C reside as information in the computer memory from the start, but so also would the code needed to implement the selection process [ ]. Thus a human implementation of declarative computing has to be simulated with cause and effect software. The usual computational complexity theorems would therefore still apply to these strings of code. But although in human-designed computers the seek and select program strings are found in computer memory from the outset, this doesn't apply in nature. After all, we don't actually observe the laws of physics themselves; all we observe is their apparent controlling and organizing effects on our observations. Thus physical laws are more in the character of a transcendent reality controlling the flow of events. Likewise, if there is such a thing as teleological selection criteria in our cosmos then they too would, I guess, be a transcendent reality and only observed in their effects on the material world. Nature, as far as we can tell, doesn't have a place where we can go to see its stored generating programs doing the seeking and selecting. But when we do conventional computing we can only simulate transcendent generation and selection laws with explicit program strings residing observably in memory.
6. Creating/generating Information?
I've schematically represented a declarative computation as:
[A B C] => Output
5.0
...where A, B, C are cause and effect processes which (perhaps using expanding parallelism) flag configurational objects with particular properties. The square brackets represent the operation of making a selection of configurations which simultaneously possess the sought-for properties flagged by A, B and C. This selection is designated by "Output".
The computational complexity of the computation represented by 5.0 is measured by two resources:
a) If the above operation were to be rendered in conventional computation there would be program strings for A, B, C and [ ] which, when executed, would simulate the generation and selection of configurations. The combined length of those strings would be one aspect of the complexity of the computation.
b) The second measure is the count of operations required to reach the output. If we are simulating the declarative computation using parallel processing then linear time would be one way to measure the complexity, but if we are using expanding parallelism it is better to measure the complexity in terms of the total count of parallel operations.
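A rough, hedged sketch of how these two measures might be tallied in a conventional simulation of 5.0 (the predicates and the candidate bound are hypothetical stand-ins; measure (a) is taken crudely as source-text length and measure (b) as a count of flagging operations):

```python
# Sketch of the two complexity measures for a conventional simulation of
# [A B C] => Output. Run this as a script so inspect.getsource can read the
# function definitions.

import inspect

OP_COUNT = 0

def counted(flag):
    """Wrap a flagging process so that every application is tallied."""
    def wrapped(n):
        global OP_COUNT
        OP_COUNT += 1
        return flag(n)
    return wrapped

def A(n): return n % 7 == 0
def B(n): return str(n) == str(n)[::-1]
def C(n): return sum(int(d) for d in str(n)) == 14

processes = [counted(A), counted(B), counted(C)]

# Measure (a): static program-string length in characters.
program_length = sum(len(inspect.getsource(f)) for f in (A, B, C))

# Measure (b): flagging operations consumed in reaching the output.
output = [n for n in range(1, 100_000) if all(p(n) for p in processes)]

print("program length (chars):", program_length)
print("flagging operations   :", OP_COUNT)
print("first few outputs     :", output[:5])    # e.g. 707 qualifies
```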
In light of this paradigm is it right to make claims about either information being conserved and/or information being created? As we will see below the answer to that question is 'yes' and 'no' depending on what perspective one takes.
Firstly let us note that 'information' is, connotatively speaking, a bad term because it suggests something static, like a filing cabinet full of documents. In contrast we can see that expression 5.0 is in actual fact a representation of both static information and computational activity. If we are going to perceive 'information' in purely static configurational terms then 5.0 clearly creates information in the sense that it creates configurations and then flags and selects them; the teleological rationale being that of finding, flagging and selecting otherwise unknown configurations which are solutions to the stated problem. So, secondly, we note that the creation of the configurations which are solutions to the stated problem cannot take place without computational activity.
Spurious ideas about information somehow being conserved may well originate in the practical problem of the storage of configurational information where storage space is limited: algorithmic information theory is often concerned with how to map a large string to a shorter string, that is, how to compress the string for the purpose of convenient storage without losing any information. Here algorithmic information theory reveals some kind of conservation law in that the information in a string clearly has a limit beyond which it cannot be compressed further without losing information. In fact, as is well known, a truly random configuration cannot be compressed at all without losing information. In this "finite filing cabinet" view of information, a view which deals with configuration (as opposed to activity), we find that the minimum length to which a string can be compressed without loss of information is fixed; in that sense we have a conservation law.
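As a hedged, practical illustration of that incompressibility point (using an off-the-shelf compressor as a crude stand-in for ideal algorithmic compression):

```python
# Highly ordered data compresses dramatically; random data barely compresses
# at all. zlib is only a rough proxy for the ideal algorithmic limit.

import os
import zlib

ordered = b"ab" * 50_000          # a highly ordered 100,000-byte string
random_ = os.urandom(100_000)     # 100,000 bytes of (pseudo)random data

print("ordered:", len(ordered), "->", len(zlib.compress(ordered, 9)))
print("random :", len(random_), "->", len(zlib.compress(random_, 9)))
# Typically the ordered string shrinks to a few hundred bytes, while the
# random string stays at (or even slightly above) its original length.
```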
But when we are talking about the generation and selection of configurations, configurational information isn't conserved; in fact the intention is to create and flag otherwise unknown configurations. Moreover, we are not talking here about an informational mapping relationship because this activity need not be one that halts but just continues generating more and more complex configurations that fulfill the criteria A, B, C, ...etc. Nevertheless, it may still be possible that some kind of conservation law could be formally constructed if we mathematically bundle together the program strings needed for A, B, C and [ ] along with the quantity of activity needed to arrive at a selection. Hence we might be able to formalise a conservation of information along the lines of something like:
Program strings + activity = output.
6.0
...that is, the computation of the required output is the "sum" of the initial static information and the amount of activity needed to arrive at a result; here initial information and computational activity have a complementary relation. Of course, at this stage expression 6.0 is not a rigorously formal relationship but really represents the kind of relationship we might look for if the matter were pursued.
As I noted toward the end of this paper in my Thinknet project, the declarative paradigm symbolised by 5.0 provides a very natural way of thinking about "specified complexity". As we have seen, the word 'specified' connotes some kind of demand or requirement conceived in advance; hence the word 'specified' has a teleological content, a goal aimed for. 'Complexity', on the other hand, will be a measure of both the initial static information and the computational activity required to meet the stated goal. These two aspects are more formally expressed in relationship 5.0, where the specifications are represented by properties flagged by processes A, B and C, and the complexity of the computation is measured by 6.0.
I would like to moot the idea that expression 5.0, where its form is understood to apply in an abstract and general way, is an important aspect of intelligent activity. This is of course by no means a sufficient condition of the dynamics of intelligence and in fact only represents one aspect, albeit a necessary condition, of intelligence; namely that of goal seeking.
7. The poverty of de facto ID.
The de facto ID movement comes with strong vested interests. Firstly there are political interests: North American de facto ID finds itself leaning into the arms of the right wing, although to be fair this may in part be a reaction against some of the left-slanting atheism of the academic establishment. Secondly there are intellectual interests. As we have seen, de facto ID is all but irreversibly committed to a paradigm of intelligence that is beyond human investigation in so far as they have mooted the concept that intelligence is some kind of oracular magic that cannot be simulated in algorithmic terms. This has naturally fitted in with their dualistic outlook which sets intelligent agency over and against the "natural forces" of nature, forces which are thought by them to be too profane, inferior and "material" to be at the root of a putatively "immaterial" mind with the power to create information in a mysterious magical way. This almost gnostic outlook neglects all the potentiality implied by the fact that nature (which includes ourselves) is an immanent God's creation.
What may not have helped the de facto IDists is that current scientific attitudes are slanted almost exclusively in favour of a cause and effect view of the physical regime, a regime where there is no room for "final causes", i.e. teleology. In the cause and effect paradigm the future is seen to be exclusively written into past events (at least probabilistically) and little or no credence is given to the possibility that there may be transcendent selection effects waiting invisibly in the wings.
As we have seen, without explicitly referring to the dynamics of end results and/or intentionality it is very difficult to define "specified complexity". This has in fact hamstrung the de facto IDists' attempts to define it themselves. It has led N&H to define "specified complexity" in terms of static configurations alone, thus neglecting the important dynamic aspect of information generation. According to N&H a configuration is judged to have Algorithmic Specified Complexity (ASC) if it has a high improbability but a low relative algorithmic complexity. Once again they've non-committally hedged on the concept of intelligence by placing it beyond analysis, into the hidden libraries of relative algorithmic complexity. The result is a definition of specified complexity that is full of loopholes, as we have seen: at best it can identify complex organisation. But as we have also seen this doesn't always work and, moreover, ASC isn't as strongly conserved as they claim it is; it is a poor definition that is far from robust and gets nowhere near identifying the presence of intentionality. Their problem traces back to the fact that they are trying to identify the operation of intelligence without cognisance of the dynamics of intentionality and goal seeking.
The intrinsic configurational properties of an object, such as an unexpected degree of ordered complexity, are not reliable predictors of the operation of intentionality; they may be just incidental to cause and effect processes. When we look at objects of archaeological interest, whether they be complex or simple, we look for evidence of intentionality. But we can only do that because we are human and have some inkling of the kind of things humans do and the goals that motivate them. In archaeological work, as in police work, attempting to identify the presence of purpose (that is, identifying an underlying teleology) is a feature of that work. It is ironic that the atheist Joe Felsenstein should spot the inadequacy of N&H's definition to cope with even the pseudo-declarative nature of standard evolution.
The fact is N&H haven't really grasped the concept of specified information in terms of a dynamic declarative paradigm and therefore have failed to come up with a useful understanding of intelligent activity. Given that their vision goes no further than procedural algorithmics and configurational compressibility (connections where the information is in one sense present from the start), it is no surprise that they think that information, which they perceive as a very static object, is conserved. In contrast, the whole rationale of intelligent action is that it is a non-halting process forever seeking new configurations & forms and thereby creating them. Intelligent output is nothing if not creative. This is what I call intelligent creation.
***
Summing up
As we have seen, even an atheist like Joe Felsenstein is tempted to accept that "specified information" makes little sense outside of a teleological context, and that is why evolution is conveniently conceived in pseudo-teleological terms - that is, in terms of its end result - when in fact with evolution, as currently understood, all the cause and effect information must be built in from the outset. Of course, for a true-blue atheist any teleological rendition of evolution is at best a mental convenience and at worst a crass anthropomorphism. For myself, however, I have doubts that, even given the (procedural) laws of physics, there is sufficient built-in information to get to where we are now biologically with a realistic probability after a mere few billion years of the visible universe. Hence I'm drawn toward the heretical idea that both expanding parallelism and transcendent seek and selection criteria are somehow built into nature.
Do these notions of the teleology of declarative problem solving help to fill out the details of the mechanisms behind natural history? If we understand "evolution" in the weaker sense of simply being a history of macro-changes in phenotypes and genotypes, then what goals are being pursued and what selection criteria are being used apart from viability of form? How are the selection criteria being applied? What role, if any, does quantum mechanics have, given that it looks suspiciously like a declarative process which uses expanding parallelism and selection? I'll just have to wait on further insight; if it comes.