Monday, June 15, 2020

Breaking Through the Information Barrier in Natural History Part 5

(This post is still undergoing correction and enhancement)


In this series of posts I have been critiquing de facto IDists Nemati and Holloway's (N&H) concept of "Algorithmic Specified Complexity" (ASC), a quantity which they claim is conserved. The other parts of this series can be found here, here, here and here.

As we have seen, ASC does not provide robust evidence of the presence of intelligent activity; its conservation is also questionable. The underlying motive for N&H's work probably flows out of their notion that "Intelligence" is fundamentally mysterious and cannot be rendered in algorithmic terms no matter how complex those terms are. Also the de facto ID community posit a sharp dichotomy between intelligent activity and what they dismiss as "natural forces". They believe that the conservation of what they identify as "Information" can only be violated by the action of intelligent agents, agents capable of creating otherwise conserved "information" from nothing. 
For me, as a Christian, de facto ID's outlook is contrary to many of my own views that readily flow out of my understanding that those so-called "natural forces" are the result of the creative action of an immanent Deity and will therefore reveal something of God's nature in their power to generate difference and pattern; after all human beings are themselves natural objects with the ability to create pattern and configuration via artistic and mathematical endeavour. 

Although I would likely see eye-to-eye with de facto IDists that there is no such thing as something coming from nothing (a notion that is the proposal of some atheists), for me the very glory of creation is that it generates otherwise unknown & unforeseen patterns and configurations; as Sir John Polkinghorne puts it our cosmos is a "fruitful creation": Whilst for the Divine mind there may be nothing new under the sun, for those under the sun the unfolding of creation is a revelation: So whilst it's true that the realisation that something has to come from something prompts us to probe for a formal expression of the conservation of something, on the other hand that creation creates pattern and that humans learn from the revelation of this creation suggests we also probe for a formal expression of our intuition that information is created. De facto IDists behave like crypto-gnostics unable to acknowledge the sacredness of creation (albeit corrupted by Satan and Sin).

All in all I find the IDists' obsession with trying to prove an all-embracing theorem of information conservation as misguided and futile as the atheist project to show how something can come from nothing. 

***


1. Evolution and teleology
In this part 5 I want to look at atheist Joe Felsenstein's reaction to N&H's efforts. In fact in this post Joe Felsenstein criticizes the concept of ASC on the basis that it simply doesn't connect with the essential idea of evolution: that is, the selection of organic configurations based on fitness:

FELSENSTEIN: In natural selection, a population of individuals of different genotypes survives and reproduces, and those individuals with higher fitness are proportionately more likely to survive and reproduce. It is not a matter of applying some arbitrary function to a single genotype, but of using the fitnesses of more than one genotype to choose among the results of changes of genotype. Thus modeling biological evolution by functions applied to individual genotypes is a totally inadequate way of describing evolution. And it is fitness, not simplicity of description or complexity of description, that is critical. Natural selection cannot work in a population that always contains only one individual. To model the effect of natural selection, one must have genetic variation in a population of more than one individual.

Yes, the resultant configurations of evolution are about population fitness (or at least its softer variant of viability). A configuration's Shannon improbability minus its relative algorithmic complexity is an insufficient condition for this all-important product of evolution. 

Fitness (and its softer variant of viability) is a concept which, humanly speaking, is readily conceived and articulated in teleological terms as the "goal", or at least as the end product, of evolution; that is, we often hear about evolution "seeking" efficient survival solutions. But of course atheists, for whom teleological explanations are assumed to be alien to the natural world, will likely claim that this teleological-sounding talk is really only a conceptual convenience and not a natural teleology. For them this teleology is no more significant than cause-and-effect Newtonianism being thought of in terms of those mathematically equivalent "final cause" action principles. As is known among theoretical physicists there is no logical need to think of Newtonian mechanics in terms of final causation: Algorithmically speaking, "final cause" action principles are just a nice way of thinking about what are in fact algorithmically procedural Newtonian processes, processes driven from behind. Pre-causation rather than post-causation rules here. 

However, although Felsenstein the atheist will likely acknowledge there is no actual teleology in evolution, he nonetheless accepts that thinking about evolution in terms of its end result of fitness makes the whole process (pseudo) meaningful: Felsenstein points out that the earlier ID concept of Complex Specified Information (CSI) is intrinsically more meaningful than ASC and notes that CSI was explicitly stated by William Dembski in terms of end results: Viz:

In the case where we do not use ASC, but use Complex Specified Information, the Specified Information (SI) quantity is intrinsically meaningful. Whether or not it is conserved, at least the relevance of the quantity is easy to establish. In William Dembski's original argument that the presence of CSI indicates Design (2002) the specification is defined on a scale that is basically fitness. Dembski (2002, p. 148) notes that:

"The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of the minimal function of biochemical systems. Darwinist Richard Dawkins cashes out biological specification in terms of the reproduction of genes. Thus in The Blind Watchmaker Dawkins writes "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is ... the ability to propagate genes in reproduction."

The scale on which SI is defined is basically either a scale of fitnesses of genotypes, or a closely-related one that is a component of fitness such as viability. It is a quantity that may or may not be difficult to increase, but there is little doubt that increasing it is desirable, and that genotypes that have higher values on those specification scales will make a larger contribution to the gene pool of the next generation.

Herein we find the irony: Both Felsenstein and Dembski note that Complex Specified Information is only meaningful in terms of a conceived end result: e.g. fitness, viability, minimal function, gene propagation or whatever. But of course what appears to be teleology here would simply be regarded by a true-blue atheist as an elegant intellectual trick of no more teleological significance than the action principles of physics. 


2. Evolution and the spongeam
It is relatively easy to determine whether a given organism has viability; that is, whether it is capable of self-maintenance and self-replication: just watch it work! But the reverse is much more difficult: From the general requirements of self-maintenance and self-replication it is far from easy to arrive at detailed structures that fulfill these requirements. This is where evolution is supposed to "solve" the computational problem: It is a process of "seek and find"; a "find" registers as successful if a generated configuration is capable of self-maintenance and self-replication given its environment. In evolution the self-maintaining and self-replicating configurations are, of course, self-selecting. That's the theory anyway. 

Strictly speaking, of course,  talk about evolution "solving" a computational problem is not going to go down well with the anti-teleologists because the activity of "solving" connotes an anthropomorphic activity where a problem has been framed in advance and a "solution" sought for; computation in most cases is a goal motivated process where the goals of problem solving are its raison d'etre. But to a fully fledged atheist evolution has no goals - evolution just happens because of the cosmos's inbuilt imperative logic, a logic implicit from the start, a logic where evolutionary outcomes are just incidental to that logic. If we believe evolution to be driven by causation from behind, then evolutionary outcomes will be implicit in the initial conditions (at least probabilistically). It has to be assumed that these outcomes will at least have a realistic probability of coming about given the size & age of the universe and the causation laws that constrain the universe. To this end I have in various papers and posts caricatured evolutionary processes as the exponential diffusion of a population of structures across configuration space. Each reproductive step is constrained by the conditions of self-maintenance and self-replication: But this process will only work if a structure I call the spongeam pervades configuration space. I'm not here going to air my doubts about the presence of this structure but simply note that conventional evolution requires the spongeam to be implicit in the laws of physics. The spongeam is in effect the depository of the up-front-information which is a prerequisite of evolution and also OOL. More about  the spongeam can be found in these references......


Although I think reasonable atheists would accept that evolution requires a burden of up-front-information, there may still be resistance to this idea because it then raises the question of "Where from?". Like IDists who are determined to peddle the notion of information conservation, some atheists are constantly drawn toward the "something from nothing" paradigm. See for example the discussion I published in the comments section of the post here, where an atheist just couldn't accept that the natural physical regime must be so constrained that evolution effectively has direction. He may have been of the "naked chance" persuasion.   

3. Generating complexity
As we have seen in this series, random configurations are identified by the length of the algorithm needed to define them: Viz: If the defining algorithm needs to be of a length equal to the length of the configuration then that configuration is identified as random. However, that a random configuration can only be defined by an algorithm of the same length doesn't mean that the random configuration cannot be generated by algorithms shorter than the configuration itself: After all, a simple algorithm that systematically generates configurations, such as a counting algorithm, will, if given enough time, generate any finite random sequence. But as I show in my book on Disorder and Randomness, small-space parallel processing algorithms will only generate randomness by consuming huge amounts of time. So basically generating randomness from simple beginnings takes a very long time (actually, by definition!). In this sense randomness has a high computational complexity. 
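By way of a toy illustration of this point (my own sketch, not part of N&H's formalism; the function name is hypothetical): a counting algorithm that enumerates binary strings in order will eventually reach any given finite string, but the number of steps it consumes grows roughly exponentially with the string's length - simple beginnings, huge time cost.

```python
from itertools import product

def steps_to_generate(target: str) -> int:
    """Count how many strings a simple counting algorithm emits
    before it first produces the target binary string."""
    steps = 0
    length = 1
    while True:
        # Enumerate all binary strings of the current length in order.
        for bits in product("01", repeat=length):
            steps += 1
            if "".join(bits) == target:
                return steps
        length += 1

# The generating algorithm is short and fixed, but the time cost of
# reaching a given string roughly doubles with every extra bit.
print(steps_to_generate("1"))         # 2
print(steps_to_generate("1011"))      # 26
print(steps_to_generate("10110101"))  # 436
```

The algorithm itself is algorithmically simple; it is the run time, not the program length, that blows up as the target gets longer.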

Although complex organised entities like biological structures obviously do not classify as random configurations, they do have properties in common with randomness in that they show a great variety of sub-configurational patterns and are potentially a huge class of possible configurations (but obviously a lot smaller than the class of random configurations). Therefore it is very likely that such complex configurations as living structures, being somewhere between high order and high disorder in complexity, are themselves going to take a long time to generate from algorithmic simplicity; a much longer time, I'll wager, than the age of the universe, even given the constraining effect of the laws of physics. This would mean that to generate life in a cause and effect cosmos and in a timely way, sufficient up-front-information (such as the spongeam) must be built into the cause and effect algorithms from the outset. The cause and effect paradigm, even if used probabilistically, requires that the outcomes of a process are implicit (if only probabilistically) in the current state of the procedure. But if there is insufficient up-front-information built into a cause and effect system to drive the generation of life in a short enough time, how could it be done otherwise? If we imagine that there was no spongeam, could life still be arrived at? I believe there are other possibilities and ideas to explore here.

4. Declarative languages and computation
In conventional evolution (and OOL) the potential for life is thought to be built into a physical regime, a regime driven from behind in a cause and effect way. But if those cause and effect laws are simple parallel imperative algorithms and provide insufficient constraint (i.e. insufficient up-front-information), life can then only be developed by consuming considerable time and space resources; more time and space, in fact, than the known cosmos provides. So, as I have proposed in my Thinknet and Melencolia I projects, one way of solving the generation time problem is to use expanding parallelism. But for this technique to work another important ingredient is needed. This is the declarative ingredient: what is generated is subject to selection, and therefore in a declarative context the algorithms are explicitly teleological, driven by a goal-seeking intentionality.

Most procedural programs are actually implicitly teleological in that they are written in an imperative language with the aim of producing some useful output; that is, an end product. But in a true declarative program the procedures aren't written down but rather the declarative language itself is used to state the desired end product in a logical and mathematical way and the compiler translates this formal statement into procedures. A practical example of a simple declarative language would be as follows: 


5. Example of a declarative language
Our problem might be this: Is there a number that has the following set of properties: Its digits add up to a specified number N1, it is exactly divisible by another specified number N2 and it is also a palindrome. This is a declaration of intention to find if such a number or numbers exist. One way to conceive the solution of this problem is to imagine all the natural numbers actually exist, at least platonically as locations in some huge imaginary "memory" that can be flagged by a signalling processes; in this sense the availability of memory space is assumed not to be an issue. In fact we imagine we have three processes at work; Viz: Process A which systematically flags numbers whose digits add up to N1, process B which systematically flags numbers which are multiples of N2 and finally process C which systematically flags palindromes. If all these three processes flag the same number then we will have a solution to the stated problem (there may be more than one solution of course). Hence a solution, or a range of solutions, can then be selected as the goal of the cluster of processes. 

A method of this sort could be employed for other mathematical properties of numbers. Hence we could extend the tool box of flagging processes indefinitely, A, B, C, D, E....etc. Each of these labels represents a process which generates a particular kind of number. So we could then make declarations of intent Thinknet style such as:

[A B C D] 
1.0

This expression represents a simple declarative program for finding numbers with all the properties flagged by A, B, C and D. This search is a two stage operation: Firstly, the configuration A B C D represents the operation of forming halos of numbers flagged with their respective properties. The square brackets [ ] represent the operation of making a selection of those numbers which simultaneously have all the properties that the processes A, B, C and D flag.
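To make this concrete, here is a minimal sketch of the flagging processes and the [ ] selection (the function names are my own hypothetical illustration, and the search runs over a finite candidate range rather than the unbounded platonic "memory" imagined above):

```python
def digit_sum_is(n1):
    """Process A: flag numbers whose digits add up to n1."""
    return lambda n: sum(int(d) for d in str(n)) == n1

def divisible_by(n2):
    """Process B: flag numbers exactly divisible by n2."""
    return lambda n: n % n2 == 0

def is_palindrome(n):
    """Process C: flag numbers that read the same both ways."""
    s = str(n)
    return s == s[::-1]

def select(processes, candidates):
    """The [ ] operation: keep the candidates flagged by every process."""
    return [n for n in candidates if all(p(n) for p in processes)]

# [A B C]: digits sum to 6, divisible by 3, and palindromic, searched
# over a finite range (a real declarative machine would need unlimited
# memory and expanding parallelism).
A, B, C = digit_sum_is(6), divisible_by(3), is_palindrome
print(select([A, B, C], range(1, 1000)))  # [6, 33, 141, 222, 303]
```

Note that each flagging process is itself procedural; only the final bracketed selection expresses the declared goal.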

In analogy to Thinknet we could further sophisticate this simple language by allowing nesting; that is:

[[A B] C D]
2.0

...where the nest [A B] is solved first by selecting a set of solutions which have properties flagged by both A and B. Processes C and D then run, flagging their respective properties. The outer brackets are eventually applied to make a selection of numbers which simultaneously have all the properties flagged by A, B, C and D.  It is likely that different bracketing scenarios will come up with different results.  Hence in general:

[[A B] C D] != [A B [C D]]
3.0

No doubt there is a lot of scope for sophisticating this simple declarative language further - for example we could have processes which seek numbers which do not have certain declared properties; these are what you might call negation processes: Hence we might have:

[A !B]
4.0

...where !B means numbers that don't have the property flagged by the process B.  Other logical operators could no doubt be added to this language. 
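A negation process is easy to sketch too (again my own illustrative code, self-contained and run over a finite candidate range): ! simply inverts the flag of the process it wraps.

```python
def is_palindrome(n):
    """Process A: flag numbers that read the same both ways."""
    s = str(n)
    return s == s[::-1]

def is_even(n):
    """Process B: flag even numbers."""
    return n % 2 == 0

def negate(process):
    """The ! operator: flag numbers the given process does NOT flag."""
    return lambda n: not process(n)

def select(processes, candidates):
    """The [ ] operation: keep the candidates flagged by every process."""
    return [n for n in candidates if all(p(n) for p in processes)]

# [A !B]: palindromes that are not even.
print(select([is_palindrome, negate(is_even)], range(1, 100)))
# [1, 3, 5, 7, 9, 11, 33, 55, 77, 99]
```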

Clearly the details of the processes A B C etc. are procedural; that is, they must be programmed in advance using a procedural cause & effect  language.  Nevertheless,  the foregoing provides a simple example of a declarative language similar to Thinknet where once the procedural work has been done setting up A, B, C... etc, the language can then be used to construct declarative programs. However, simple though this language is, practically implementing it is beyond current computing technology: Firstly it is assumed that working memory space isn't limited. Secondly, the processes A, B, C,.... and [ ] would be required to operate with huge halos of possible candidate numbers; this latter requirement really demands that the seek, flag and selection processes have available to them an expanding parallelism rather than the limited parallel computation of our current computing technology.

In and of themselves the flagging procedures designated A, B, C etc. do not appear to obviously manifest any goal driven behaviour: it is only when A, B, C etc. are brought together with [ ] that the teleology of the system emerges. But having said that we must realise that if it were possible to implement the above declarative language on a computer it would in fact be only a simulation of declarative mechanics. For in a practical computer not only would the procedural algorithms defining A, B and C reside as information in the computer memory from the start, but so also would the code needed to implement the selection process [ ]. Thus a human implementation of declarative computing has to be simulated with cause and effect software. The usual computational complexity theorems would therefore still apply to these strings of code. But although in human designed computers the seek and select program strings are found in computer memory from the outset, this doesn't apply in nature. After all, we don't actually observe the laws of physics themselves; all we observe is their apparent controlling and organizing effects on our observations. Thus physical laws are more in the character of a transcendent reality controlling the flow of events. Likewise if there is such a thing as teleological selection criteria in our cosmos then they too would, I guess, be a transcendent reality and only observed in their effects on the material world. Nature, as far as we can tell, doesn't have a place where we can go to see its stored generating programs doing the seeking and selecting. But when we do conventional computing we can only simulate transcendent generation and selection laws with explicit program strings residing observably in memory.


6. Creating/generating Information?
I've schematically represented a declarative computation  as:

[A B C] => Output
5.0

...where A, B, C, are cause and effect processes which (perhaps using expanding parallelism) flag configurational objects with particular properties. The square brackets represent the operation of making a selection of configurations which simultaneously possess the sought for properties flagged by A, B and C. This selection is designated by "Output". 

The computational complexity of the computation represented by 5.0 is measured by two resources:

a) If the above operation were to be rendered in conventional computation there would be program strings for A, B, C and [ ] which when executed would simulate the generation and selection of configurations. The length of those strings would be one aspect of the complexity of the computation.

b) The second measure is the count of operations required to reach the output. If we are simulating the declarative computation using parallel processing then linear time would be one way to measure the complexity, but if we are using expanding parallelism it is better to measure the complexity in terms of the total count of the number of parallel operations. 
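These two measures can be sketched concretely (a toy illustration of my own; the "program string" here is just a stored lambda, and the operation count is a simple sequential tally rather than a tally of parallel steps):

```python
# (a) Static complexity: the program string present in memory from the
# outset. Here a single flagging process stored as a string of code.
flag_src = "lambda n: int(n ** 0.5) ** 2 == n"  # flags perfect squares
static_cost = len(flag_src)                      # length of the string
flag = eval(flag_src)

# (b) Dynamic complexity: the count of operations needed to reach the
# output of the generate-and-select activity.
ops = 0
hits = []
for n in range(1, 101):
    ops += 1            # one unit of computational activity
    if flag(n):
        hits.append(n)

print(static_cost)  # the static, stored-information cost
print(hits)         # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
print(ops)          # 100 flag tests: the activity cost
```

The point of the sketch is that the output is "paid for" by two distinct resources: the code held in memory from the start, and the activity expended in running it.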

In light of this paradigm is it right to make claims about information being conserved and/or information being created? As we will see below the answer to that question is both 'yes' and 'no', depending on what perspective one takes. 

Firstly let us note that 'information' is, connotatively speaking, a bad term because it suggests something static, like a filing cabinet full of documents. In contrast we can see that expression 5.0 is in actual fact a representation of both static information and computational activity. If we are going to perceive 'information' in purely static configurational terms then 5.0 clearly creates information in the sense that it creates configurations and then flags and selects them; the teleological rationale being that of finding, flagging and selecting otherwise unknown configurations which are solutions to the stated problem. So, secondly, we note that the creation of the configurations which are solutions to the stated problem cannot take place without computational activity.

Spurious ideas about information somehow being conserved may well originate in the practical problem of the storage of configurational information where storage space is limited: Algorithmic information theory is often concerned with how to map a large string to a shorter string, that is, how to compress the string for the purpose of convenient storage without losing any information. Here algorithmic information theory reveals some kind of conservation law in that clearly the information in a string has a limit beyond which it cannot be compressed further without losing information. In fact, as is well known, a truly random configuration cannot be compressed at all without losing information. In this "finite filing cabinet" view of information, a view which deals with configuration (as opposed to activity), we find that the minimum length to which a string can be compressed without loss of information is fixed; in that sense we have a conservation law. 
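This "finite filing cabinet" picture can be illustrated with an off-the-shelf compressor (zlib here is only a rough practical stand-in for the ideal algorithmic measure): a highly ordered string compresses to a tiny fraction of its length, while a random string barely compresses at all.

```python
import os
import zlib

ordered = b"01" * 5000             # 10,000 bytes of a simple repeating pattern
random_bytes = os.urandom(10000)   # 10,000 bytes of (pseudo-)randomness

ordered_size = len(zlib.compress(ordered, 9))
random_size = len(zlib.compress(random_bytes, 9))

# The ordered string shrinks to a few dozen bytes; the random string
# cannot be squeezed much below its original length without loss.
print(ordered_size)   # tiny compared with 10,000
print(random_size)    # close to (or slightly above) 10,000
```

The fixed floor below which the random string will not compress is exactly the "conserved" quantity in the static view of information.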

But when we are talking about the generation and selection of configurations, configurational information isn't conserved; in fact the intention is to create and flag otherwise unknown configurations. Moreover, we are not talking here about an informational mapping relationship because this activity need not be one that halts but just continues generating more and more complex configurations that fulfill the criteria A, B, C, ...etc. Nevertheless, it may still be possible that some kind of conservation law could be formally constructed if we mathematically bundle together the program strings needed for A, B, C and [ ] along with the quantity of activity needed to arrive at a selection. Hence we might be able to formalise a conservation of information along the lines of something like: 

Program strings + activity = output.
6.0

...that is the computation of the required output is the "sum" of the initial static information and the amount of activity needed to arrive at a result; here initial information and computational activity have a complementary relation. Of course, at this stage expression 6.0 is not a rigorously formal relationship but really represents the kind of relationship we might look for if the matter was pursued. 

As I noted toward the end of this paper in my Thinknet project the declarative paradigm symbolised by 5.0 provides a very natural way of thinking about "specified complexity".  As we have seen the word 'specified' connotes some kind of demand or requirement conceived in advance; hence the word 'specified' has a teleological content, a goal aimed for. 'Complexity' on the other hand will be a measure of both the initial static information and computational activity required to meet the stated goal. These two aspects are more formally expressed in relationship 5.0 where the specifications are represented by properties flagged by processes A, B and C and the complexity of the computation is measured by 6.0.

I would like to moot the idea that expression 5.0, where its form is understood to apply in an abstract and general way, is an important aspect of intelligent activity. This is of course by no means a sufficient condition of the dynamics of intelligence and in fact only represents one aspect, albeit a necessary condition, of intelligence; namely that of goal seeking. 


7. The poverty of de facto ID. 
The de facto ID movement comes with strong vested interests. Firstly there are political interests: North American de facto ID finds itself leaning into the arms of the right-wing, although to be fair this may in part be a reaction against some of the left-slanting atheism of the academic establishment. Secondly there are intellectual interests. As we have seen, de facto ID is all but irreversibly committed to a paradigm of intelligence that is beyond human investigation in so far as they have mooted the concept that intelligence is some kind of oracular magic that cannot be simulated in algorithmic terms. This has naturally fitted in with their dualistic outlook which sets intelligent agency over and against the "natural forces" of nature, forces which are thought by them to be too profane, inferior and "material" to be at the root of a putatively "immaterial" mind with power to create information in a mysterious magical way. This almost gnostic outlook neglects all the potentiality implied by the fact that nature (which includes ourselves) is an immanent God's creation. 

What may not have helped the de facto IDists is that current scientific attitudes are slanted almost exclusively in favour of a cause and effect view of the physical regime, a regime where there is no room for "final causes", i.e. teleology. In the cause and effect paradigm the future is seen to be exclusively written into past events (at least probabilistically) and little or no credence is given to the possibility that there may be transcendent selection effects waiting invisibly in the wings. 

As we have seen, without explicitly referring to the dynamics of end results and/or intentionality it is very difficult to define "specified complexity". This has in fact hamstrung the de facto IDists' attempts to define it themselves. This has led N&H to define "specified complexity" in terms of static configurations alone, thus neglecting the important dynamic aspect of information generation. According to N&H a configuration is judged to have Algorithmic Specified Complexity (ASC) if it has a high improbability but a low relative algorithmic complexity. Once again they've noncommittally hedged on the concept of intelligence by placing it beyond analysis into the hidden libraries of relative algorithmic complexity. The result is a definition of specified complexity that is full of loopholes, as we have seen: At best it can identify complex organisation. But as we have also seen this doesn't always work; moreover ASC isn't as strongly conserved as they claim it is. It is a poor definition that is far from robust and gets nowhere near identifying the presence of intentionality. Their problem traces back to the fact that they are trying to identify the operation of intelligence without cognisance of the dynamics of intentionality and goal seeking.  

The intrinsic configurational properties of an object, such as an unexpected degree of ordered complexity, are not reliable predictors of the operation of intentionality; they may be just incidental to cause and effect processes. When we look at objects of archaeological interest, whether they be complex or simple, we look for evidence of intentionality. But we can only do that because we are human and have some inkling of the kind of things humans do and the goals that motivate them. In archaeological work, as in police work, attempting to identify the presence of purpose (that is, identifying an underlying teleology) is a feature of that work. It is ironic that the atheist Joe Felsenstein should spot the inadequacy of N&H's definition to cope with even the pseudo-declarative nature of standard evolution. 

The fact is N&H haven't really grasped the concept of specified information in terms of a dynamic declarative paradigm and therefore have failed to come up with a useful understanding of intelligent activity. Given that their vision goes no further than procedural algorithmics and configurational compressibility (connections where the information is in one sense present from the start) it is no surprise that they think that information, which they perceive as a very static object, is conserved. In contrast the whole rationale of intelligent action is that it is a non-halting process forever seeking new configurations & forms and thereby creating them. Intelligent output is nothing if not creative. This is what I call intelligent creation.


***


Summing up
As we have seen, even an atheist like Joe Felsenstein is tempted to accept that "specified information" makes little sense outside of a teleological context, and that is why evolution is conveniently conceived in pseudo-teleological terms - that is, in terms of its end result - when in fact with evolution, as currently understood, all the cause and effect information must be built in from the outset. Of course, for a true-blue atheist any teleological rendition of evolution is at best a mental convenience and at worst a crass anthropomorphism. For myself, however, I have doubts that, even given the (procedural) laws of physics, there is sufficient built-in information to get to where we are now biologically with a realistic probability after a mere few billion years of the visible universe. Hence I'm drawn toward the heretical idea that both expanding parallelism and transcendent seek and selection criteria are somehow built into nature.

Do these notions of the teleology of declarative problem solving help to fill out the details of the mechanisms behind natural history? If we understand "evolution" in the weaker sense of simply being a history of macro-changes in phenotypes and genotypes, then what goals are being pursued and what selection criteria are being used apart from viability of form? How are the selection criteria being applied? What role, if any, does quantum mechanics have, given that it looks suspiciously like a declarative process which uses expanding parallelism and selection? I'll just have to wait on further insight; if it comes. 

Friday, May 29, 2020

Signalled Diffusion Book V: Complex Drift-Diffusion Signalling


Book V of my "Signalled Diffusion" project can be picked up here. All the other books can be picked up from this post. Below I reproduce the introduction to Book V. 

Introduction

This book rehashes and enhances some of the mathematics that can be found in Gravity and Quantum Non-Linearity and Gravity from Quantum Non-Linearity. In particular it redoes the mathematics leading to the complex non-linear equation numbered in the text as 15.1. It also probes a bit further into the nature of the sub-microscopic signalling regime that gives rise to this equation. As I have said on several occasions this is a highly speculative personal project that in all likelihood amounts to little more than a walk down an obscure track in the woods to nowhere special. But one takes a walk not because it necessarily ends up somewhere significant; rather, one enjoys the walk for its own sake. It is our duty and yet also our pleasure to enjoy the highways and byways of creation and creativity.

Thursday, May 21, 2020

Contradictions, The Academic Establishment and Matt Ridley.

Complete freedom entails freedom to undermine freedom. 


The content behind the word "Libertarian" is problematic. At one time the far left claimed this content: Libertarianism's implicit anti-government and anarchist connotations were comfortable concepts for the far left: In Marxist eschatology a centrally managed, state-run socialism was supposed to eventually give way to a decentralised, stateless communism; in Marxist theory the state really only serves the function of protecting the interests of the ruling class; therefore once this class was done away with no state would be required - so they thought*. It is a huge irony, then, that today "libertarian" sentiments have been taken over by the far right, whose lack of influence (up till now!) makes them naturally suspicious of central government intentions. They also affect to believe that decentralised market choices and the entrepreneurial spirit are the best expression of democracy; maybe the only valid expression of democracy. (See this wiki page for more on the subject of Libertarianism.)

But "Libertarianism" with  its connotation of freedom, freedom of choice, freedom to exercise responsibility to build a successful life, freedom of speech and above all a fancied freedom from government has inherent contradictions  For in a world full of zero sum games we have more often than not this constraint:

 My freedoms + your freedoms  = constant

That is, too much freedom for me may subtract from your freedoms. Freedom then is about balance & community, and good community means taking into account the freedoms of others. This is just as true of so-called "free speech" as it is of access to material resources: A vociferous campaign of free speech against another party can curtail their freedoms. Language can be used as an instrument of coercion; that becomes especially clear when we remember that social connection & status are among humanity's strongest motivations and that speech is the first port of call used to assert pecking order. Absolute "free speech" is a contradiction if we are to respect community.

The anti-government stance of extreme "libertarian" leftists and rightists is an affectation: When they claim to be anti-government what they really mean, of course, is that they are anti status quo and anti-establishment; they are in effect against those institutions of state over which they have little influence. If the revolutions which they aspire to ever took place you can rest assured that these extremists would soon install the strongest forms of government in order to coerce and maintain their vision of society, i.e. a dictatorship. As I said in my last blog post:

Looking at the mix of potential plutocrats, domineering characters and the well armed quasi-militias (in America) who make claim to the name "libertarian" it is easy to imagine a would-be-dictator arising from their ranks. And it wouldn't be the first time that "liberty" and "hegemony" have walked hand in hand; let's recall the outcomes of the English civil war of 1642, the French revolution of 1789, the October revolution and Mao's China. Idealism and hegemony are closely linked.

It is likely that Ayn Rand's vision of a sociopathic "libertarian" idealism, if implemented, would very quickly lead in this direction. I've got more than a sneaking feeling that the putative libertarianism of left and right is motivated by a mix of misguided idealism and sour grapes: i.e. those who want power, or want more power, want the status quo to move over...or else.

So with this background in mind I thought I'd have a little walk over to Matt Ridley's website to see what he's saying about covid-19. After all, Ridley styles himself (unwisely in my opinion) as a "libertarian", whatever that abused term actually means in his case. Moreover, covid-19 has rather curtailed the freedoms of many, and some extremists on the right are quite sure this is a well orchestrated deep-government plot (or conspiracy) to suppress people rather than just another black swan afflicting humanity.

So was Ridley going to join the Trump-supporting conspiracy theorists? Well no, he's far too clever for that, I'm glad to say. In fact in reading his blog I found a lot of good and intelligent stuff there that I wouldn't want to take issue with and could recommend. But there remains the question of which tribe, if any, he identifies with. There are, to my mind, indicators in his writings that he identifies with the tribal right wing. Here are three examples where Ridley betrays his right-wing tribal sympathies:

Example 1
Take this blog post here where Ridley discusses the apparent slow-down in technological advance in various industries, an example being aircraft: I had long noted this one myself: My father's lifetime saw progress from the first rickety bi-planes right through to space travel. But in my lifetime jet aircraft, although more refined and complex, seem to have plateaued in their performance envelope. Manned space travel has also plateaued in my time. The same is probably also true of automotive technology. I put this down to the limitations within the platonic world of technological configuration space, which is constrained and controlled by a physical regime we have no power to change. Delving into this space is a bit like mining for gold; there comes a point of diminishing returns where more and more effort is needed to get less and less out. Consider for example computerisation; Moore's law applies for a while and fast progress can be maintained initially, but not indefinitely. For we know that there are physical limits on what can be stored and processed using the current physical paradigm. If we are to do better, new (and often unforeseen) technological breakthroughs are required. There could be another revolution in computerisation if breakthroughs in quantum computing take place. Likewise we would see huge market changes if there are ever breakthroughs in portable fusion energy, zero point energy or anti-gravity; in fact such changes would likely require new and revolutionary understandings in theoretical physics to be made first.

I'd be the first to admit that market-catalysed innovation and wealth can be suppressed and/or discouraged by cultural and political factors. But for a right-wing trouble-shooting political animal like Ridley politics is his first port of call: In Ridley's mind, not to mention the minds of those who share his class affiliation, bad government regulation is the usual suspect suppressing progress. That the platonic world of configuration space has an important bearing on progress hasn't come into his consideration here. Ridley's "libertarianism" sets up his default position, which means that government regulation of business must come under first critical scrutiny. But if Ridley and his tribe, as they make a grab for wealth, think they can leave the poor as a trickle-down afterthought then they are encouraging alienation & disaffection, and handing society on a plate to the revolutionaries.

Example 2
Let's now look at a blog post by Ridley on covid-19. The post is largely filled with sensible and informative observations - it's worth reading. But Ridley may well betray his tribal affiliation when he gets to this:

…This idea could be wrong, of course: as I keep saying, we just don’t know enough. But if it is right, it drives a coach and horses through the assumptions of the Imperial College model, on which policy decisions were hung. The famous ‘R’ (R0 at the start), or reproductive rate of the virus, could have been very high in hospitals and care homes, and much lower in the community. It makes no sense to talk of a single number for the whole of society. The simplistic Imperial College model, which spread around the world like a virus, should be buried. It is data, not modelling, that we need now.

Once again Ridley is found rubbishing the establishment, this time the (undoubtedly left-leaning) academic establishment. Ridley's response here is very reminiscent of the right-wingers I mention in this blog post where we find these right-wingers expressing suspicion of "modelling" and even going as far as to suggest that modelling isn't science; rather they want something "empirical"! The right-wingers I mention in that post are so stupid as to be unable to see that modelling is all about modelling empirical reality and therefore in science modelling and data go together like coach and horses.

But the problem Ridley and his tribe have is that "modelling" usually comes out of university theoretical departments. The right-wing tribe, as a rule, don't like university departments because they don't have too many allies in that sub-culture, a sub-culture which is not particularly motivated by profiteering and market choices, but whose income is pretty much tied to taxation; i.e. universities are a department of government! Therefore they must be bad!

Of course we never know enough and we always need more data, but that doesn't stop the building of models which attempt to join the data dots we do have in order to understand that data. That's what science is about: building and testing models. No model, no testing, and therefore no science.

Seldom, if ever, are models anything other than approximations and simplifications of a more complex reality. But what's the point of accumulating more data if one then doesn't use that data to update, enhance and sophisticate one's models? As Hume showed, data samples in and of themselves are meaningless and useless; what makes that data cohere are the underlying ideas we have about that data (i.e. models). Only models can give us a chance of making predictions; an inventory of disconnected data can't do that, because as soon as one makes predictions using that data one has necessarily moved over into the realm of interpretation and models.

A few minutes of mathematical jiggery-pokery is all that is required to come up with our first crude covid model: The exponential growth in time G(t) of a breeding organism is given by:

G(t) = exp[ ai log(Ri) t ]          (1.0)

...where Ri is the R-value for the ith demographic, ai is a rate constant set by the typical time between "multiplications" (in effect the inverse of that time), and t represents time. Crude simplification though it is, equation 1.0 is nevertheless very instructive and points in the direction of where to go for refinements. It tells us that the R-value for a demographic is useless without ai. The R-value for a demographic will not likely be the same for each transmission; like ai, Ri is merely a typical value, a value averaged over some presumably normal distribution. The model that equation 1.0 represents can be made more sophisticated by adding more "i" terms as data comes in about those demographics.
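Equation 1.0 is easily turned into a few lines of code. The sketch below is my own illustration of the single-demographic case; the parameter values are made up for demonstration, and ai is taken, per the definition above, as the inverse of the typical time between "multiplications":

```python
import math

# A toy rendering of equation 1.0: G(t) = exp[ ai * log(Ri) * t ].
# Illustrative only; the parameters are not fitted to any real covid-19 data.
def growth(t: float, R: float, generation_time: float) -> float:
    """Relative case count at time t for a single demographic."""
    a = 1.0 / generation_time            # ai: "multiplications" per unit time
    return math.exp(a * math.log(R) * t)

# With R = 2 and a 5-day generation time the count doubles every 5 days,
# so after 10 days we expect roughly a factor of 4:
print(growth(10.0, 2.0, 5.0))
```

A multi-demographic refinement would simply sum terms like this over i, one per demographic, which is exactly the direction of refinement the text suggests.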

The above equation is the result of a few minutes' mathematical deliberation by a non-expert; so if I can do this in a few minutes you can be sure that the bright sparks at Imperial College have the time, space and aptitude to do a lot, lot better. Of course there is always room for criticising and enhancing the most sophisticated of models - but the modellers at Imperial will be well aware of that too! In any case the R-value averaged over a variety of demographics does give us some indication of the realities, although if substituted into a single-demographic equation like 1.0 it wouldn't return very accurate predictions. Even better than simplified analytical equations is to carry out, as near as possible, a very literal simulation inside a super-computer.

Unless wholly misconceived, models should not be buried in favour of meaningless lists of data, especially if the model is at the very least a first approximation. Approximate models are the starting point and foundation on which more sophisticated models can be built, and their subsequent predictive value is a measure of how close they are to converging on a depiction of reality. To my mind it's a good thing Imperial College's model has spread across the world - the more hands-on-deck critical analysis (and subsequent enhancement) it gets the better.

Now I'm sure a guy as bright as Ridley really understands all this, so what's his little game? My guess is that Ridley, as might be expected of the tribe he has thrown his lot in with, just doesn't like left-leaning universities and the stuff which comes out of their tax-funded departments. So Ridley has to make the kind of noises his tribe needs, and scepticism of academia's models is something they like to hear. All this is of a piece with Ridley's scepticism of the academic establishment's climate change models.


Example 3
Ridley's right-wing tribal affiliations and credentials were confirmed when I spotted this blog post where we hear about Ridley's audio appearance on the show of conspiracy theorist and Mormon Glenn Beck. Beck isn't quite in the same league as the batshit-crazy Alex Jones, although not that far from it. According to Wiki:

During Barack Obama's presidency, Beck promoted numerous falsehoods and conspiracy theories about Obama, his administration, George Soros, and others.

Writer Joanna Brooks contends that Beck developed his "amalgamation of anti-communism" and "connect-the-dots conspiracy theorizing" only after his entry into the "deeply insular world of Mormon thought and culture".

But I'm glad to say the conversation Beck had with Ridley was worthy of Ridley's intelligence and didn't plumb the depths of Beck's aptitude for daftness: In their conversation there was no hint that covid-19 is anything other than a natural disaster that we need to cope with as best we can. In contrast, however, there are numerous references to conspiracy theorism throughout Beck's Wiki page and this conspiracy theorism seems to be what Beck is really all about. So what was Ridley doing on this show? There's only one answer to that question that I can think of; namely, Ridley's right-wing tribal affiliations mean that his social connections make the Glenn Beck show a natural stage for performance, because he's not likely to be called on for authoritative comment by "leftist" institutions (like the BBC?!). So where else does he go?


ADDENDUM 1 June 2020

As this post is about the contradictions found in right-wing tribalism I must make note of the paradox of Ridley's promotion of economic Darwinism; I'm not going to read it, but I'm fairly confident that this Darwinist slant is the world view out of which Ridley's book "The Rational Optimist: How Prosperity Evolves" emerges. Moreover, I'm sure Ridley's thesis chimes well with Ayn Rand's sociopathic philosophy. Needless to say a Darwinist line of thought would not go down well with the Christian right-wing who either support young earthism or de facto Intelligent Design. And yet economically and politically this is who Ridley is in bed with.




Footnote
* They also thought that since a communist society was supposedly "classless" and a place where everyone's interests were supposed to harmonise & coincide there would be no more social strife (!). Tell that to the marines!

Sunday, April 26, 2020

Intelligence, Oracles, Magic and Politics


The de facto ID concept of intelligence.

As I have remarked many times on this blog, the de facto Intelligent Design movement affects to leave the internal details of the "intelligence" they believe to have stepped in and directly created life as a mystery. There is some justification in this policy: When handling great mysteries (e.g. Divinity) caution is sometimes the better part of valour and so it may be best to proceed apophatically; that is, to define the mystery in terms of what it isn't. An apophatic approach to intelligence seems to be the stock-in-trade of the de facto ID community in North America. In fact as far as I can tell the mainstream IDists believe that the intelligent agent which created life is neither explicable in terms of so-called "natural forces" nor, for that matter, in terms of any process which has the potential to be expressed algorithmically, no matter how complex that algorithm may be. I find their views a little ironic: As many of them make claim to a Christian faith one might think that those so-called "natural forces", which we as Christians believe to be God's sublime Creation, may hold one or two surprises for us as to what these "forces" (under Sovereign management) can do; after all, Quantum Mechanics alone has left enlightenment humankind thoroughly perplexed as to what it all means (for a start it is no longer meaningful to talk of matter as having identity of substance; identity comes via configuration). But no, in the de facto IDist world the "profane natural forces" vs "sacred intelligent agency" dichotomy is their habitual thesis and antithesis. In their view "matter" is too debased and inferior to be a secondary source of the dignified sublimity of mind.

So, in the light of all this I was not in the least bit surprised to find a post on the de facto Intelligent Design website Uncommon Descent with links to ID material giving the clearest evidence I've yet seen that de facto ID prefers to think of true "Intelligence" as a property tantamount to a magical power, setting it apart from anything else we encounter in this world*1. The UD post in question alerts us to one of de facto ID's gurus who is attempting to identify human intelligence as having the ability to act as a "partial halting oracle". That is, it is assumed that human intelligence is an oracle which can in some (but not all) cases solve the halting problem. According to Wiki, the concept of an "Oracle" as used in computational theory is defined as follows:

An oracle machine can be conceived as a Turing machine connected to an oracle. The oracle, in this context, is an entity capable of solving some problem, which for example may be a decision problem or a function problem. The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program. The oracle is simply a "black box"  that is able to produce a solution for any instance of a given computational problem:

A "black box" capable of doing the right thing sums up those inscrutable oracular powers. This manoeuvre by an IDist guru well and truly places the essence of intelligence all but beyond analytical probing *2. As I have said many times before the de facto IDist's preference for an esoteric notion of intelligence traces back to their use of their "explanatory filter" which once it has been used to settle on intelligent agency as the cause of a pattern doesn't really allow one to proceed much further. This of course contrasts with my own approach to intelligence which doesn't resort to super-analytical processes; well nearly: In my Thinknet project I see intelligence as a teleologically driven search process by a "Thinknet" like system. Thinknet systems are potentially chaotic which means that they can amplify those quantum ambiguities up to the macroscopic level, ambiguities which if they remained un-collapsed would give us people who could be in two places at once. Well, we can't have that at the macroscopic level so if the mind is constantly collapsing those wave-function, then, I tender, it is this process of constant collapse which generates consciousness.  But if the mind amplifies those apparent random collapses  up to macroscopic level there is therefore the potential for it to manifest that great incomputable - absolute randomness; so in that sense mind has an incomputable aspect to it. Nevertheless, what I'm proposing is no blackbox concept of intelligence: I'm working on a notion of intelligence that is much more resolvable than ID's magical oracular black box and this is why I have to sophisticate the explanatory filter.

Turning to my subjective perspective on my own thought life, I must say that it certainly doesn't feel like some magical oracle able to coolly solve a problem just like that! On the contrary, problem solving requires the hard graft of mental searching as one attempts to make connections which lead to solutions. To me my thought life feels much more like the seek, reject and select trial & error grind of a Thinknet search than it does ID's magical oracle where genius solutions just pop into the head. I see the hard work associated with thinking as a consequence of the overhead incurred by using a very general-purpose thinking system, with a general-purpose connectionist language, to solve generic problems; as this system is a jack-of-all-trades problem solver it can be slow at solving specialised problems because it has to first translate the problem into its connectionist terms.

I don't have a strong claim to having clinched the essence of intelligence any more than do the de facto IDists. But like myself they have just as much right to investigate an avenue of possibility in their search for what intelligence is about. In fact I believe their presence is a good thing; the more people investigating different avenues the better. For all I know the IDists might be right! Also, like the IDists I believe that intelligence of some sort underpins the nature of the cosmos. So under any other circumstances I would applaud the IDists' efforts at tentatively trying to move forward with something new; after all, that's science for you. But I'm afraid in this case I can't applaud. Why is that?

***

Well, the answer to that is politics, especially the politics in North America. It's the catalyst that has precipitated and hardened a "natural forces" vs "intelligent agency" polarisation. The IDists are personae non gratae among the academic establishment and so it is no surprise that these IDists have been tempted to put all their eggs into the "intelligence-is-magic" basket in order to batter academia's evolutionary and algorithmic rendering of the processes of life, processes the academics believe to have generated human intelligence. Sometimes I wonder if the de facto ID people aren't really being serious with their proposals and simply come up with their stuff just to rile the academic establishment!

But the politics doesn't stop there. IDism is all part of a greater right vs left wing tribal conflict, which means that the right wing sharply disagree with the government-tenured academics over one or more of a set of well contended issues (as mentioned in my last blog post): e.g. vaccinations, climate change, gay rights, deep-government conspiracy theories, the regulatory role of government, the covid-19 lock down, hyper-market libertarianism, gun rights etc. The common underlying theme running through all this is the distrust right-wingers have toward central government interventions; no, make that status quo interventions: When it comes down to it the right wing is just as capable of supplying individuals of totalitarian inclination as any other human sub-culture, if not more so. Do you think those characters one finds in America's quasi-militias would have the slightest respect for the argumentative cut and thrust of an authentic parliament? Unlikely: More to their taste would be for one of their plutocrats to do a Cromwell and clear parliament using AR-15-armed thugs.

Crackpot daftness can be found on the extremes of both left and right, but my argument here is with the right-wingery of the de facto ID community. Right-wing sentiments ultimately drive their all but exclusive commitment to an oracular paradigm of intelligence. They've backed themselves into the cramped corner of this paradigm because they are suspicious of those government-tenured academics who for the most part will get rubbed up the wrong way by de facto ID's support of oracular intelligence.

The republican language coming out of England's 1642 civil war fed into the American war of independence (from tax) and now the North American right wing endlessly recapitulates the sentiments of this language, viz: interference coming from a tax-funded government is at best regarded with suspicion and at worst as evidence of a deep-government conspiracy. For example, on Uncommon Descent one can find references to "climate change alarmism" and also "covid-19 lock down alarmism". The emotive term "alarmism" is the keyword expressing right-wing apprehensions about projects largely emanating from government-sponsored, tax-funded bodies. In my view coordinating the social responses to the black swans of climate change and covid-19 requires centralised information and control; such a response is well beyond the powers of the sluggish market with its distributed blind-watchmaker decisions. But such government involvement is the right-winger's worst nightmare come true; especially if government should muff it (which they often do!)

The pretext supporting the "libertarian" polemic about covid-19 and climate change is, however, entirely plausible if not sound: The world's wealth-generating markets could be so affected by central government policies as to cause huge economic hardship or perhaps even an apocalyptic economic collapse. But this line of argument cuts both ways. Covid-19 and climate change, if left to run their courses, could conceivably also cause economic collapse. Moreover, the right wing's emotive language can be used against them: One might accuse them of promulgating "economic hardship alarmism", or "totalitarian new world order alarmism". Both sides are faced by the same dilemma: The fix may be worse than the problem!

Whilst I strongly reject the borderline Marxism and anti-theism found among some academics, neither can I support the right-wing affectation for so-called libertarianism. Libertarianism is to the free market as fundamentalism is to Christianity; they are the kiss of death for the things they purport to uphold. Sociopathic libertarianism is a source of social disaffection, thus helping to serve up a discontented society on a platter to either Marxist or right-wing dictators. For example, allowing covid-19 to take its course is likely to strike harder among the poor than the rich, and therefore this solution to our problems is readily perceived as the solution in favour of the rich. Moreover, self-branded "libertarianism", with its connotation of "liberty", comes under the heading of "self-praise is no recommendation": Looking at the mix of potential plutocrats, domineering characters and the well armed quasi-militias (in America) who make claim to the name "libertarian" it is easy to imagine a would-be dictator arising from their ranks. And it wouldn't be the first time that "liberty" and "hegemony" have walked hand in hand; let's recall the outcomes of the English civil war of 1642, the French revolution of 1789, the October revolution and Mao's China. Idealism and hegemony are closely linked.

The many wildcards of socio-economics don't stop some people thinking they are clever predictors and planners. The open-endedness of socio-economic systems is a bottomless pit of new data that can be cherry-picked and tailored to support the favoured planning polemic. In a chaotic world human beings are necessarily complex adaptive systems and therefore by definition much better opportunists than they are planners. They make their decisions and take their opportunities on the hoof. Like other biological organisms, society is a mix of central as well as distributed control, and this mix no doubt better suits a chaotic world where black swans create new problems and at the same time deliver otherwise unforeseen opportunities. But the time-honoured overriding concern of human beings is that of hanging on to the immediacies of survival at all costs, and that's probably why many people favour social distancing rather than the long shot of saving an abstract economic system that more likely favours lining the rich man's pockets in his ivory tower before it gets to line your pockets (if you've survived covid-19!). While there's life there is hope, hope that the new opportunities open up into vistas of fruitful originality and prosperity. We can only plant and water; it is God that gives growth.


POSTSCRIPT 
27/4/20

In a post on Uncommon Descent that I wouldn't necessarily want to take issue with we find an interesting comment from a character called "Polistra". Viz:

Polistra April 26, 2020 at 2:48 pm
This is silly and illogical. It wasn’t the virus that stopped the world.
The virus just wandered around and found tissues to infect, and the humans who own the tissues killed the virus using standard weapons and tactics. A very few humans were unable to maintain the war, and they lost.
The world was stopped by GOVERNMENTS. The virus was just the latest fake “reason” for stopping the world.

This commentator doesn't like the fact that the UD post suggests 900 bytes of covid-19 DNA is the reason why the world has shut down. Polistra clearly wants a much clearer statement that the culpability lies with GOVERNMENTS.  Polistra doesn't tell us why governments want to shut the world down with what he calls a "fake" reason any more than flat earthers will tell us why the UN wants us to believe in a spherical earth instead of their flat earth. Although I don't think most UDers would go along with this kind of conspiracy theorism it's probably significant that they don't challenge him: He's one of them, he's part of their anti-government tribe!  The irony is, as I have already said, that it's so easy to see dictators readily emerging from the ranks of the domineering fanatical right wingers if they should ever get power.


Footnotes
*1 I'm not quite sure how this works out with human beings, objects which from a third-person perspective are observed to be entirely a product of complex organisations of God's atoms.

*2 Turing's halting theorem and Gödel's incompleteness theorem are closely related in that both use the "runaway self-referencing" reasoning found in the diagonalisation procedure. Roger Penrose proposed that the human ability to understand Gödel's argument proved that human thinking is an incomputable process. Hence Penrose's ideas are also favoured by IDists. Whilst it is wrong to dismiss Penrose outright, I have submitted my reasons why I don't follow him down this particular avenue.

Saturday, April 18, 2020

Anti-Science or Anti Academic Establishment?


They'll love Mars then!

Since the 1960s Western Christianity, especially among the liberal academic and intellectual elite, has become increasingly marginalised. Although this drift undoubtedly predated the 1960s, the cultural marginalisation of Christianity by intellectual trend-setters has, to people on the ground like myself, become noticeably more pronounced since then. The Christian reaction among those with fundamentalist tendencies was and is to counter this cultural shift with a loud proclamation of contrariness; although this contrariness is probably less caused by fundamentalism than it is the very definition of fundamentalism: feel marginalised? Just shout louder! This contrariness expresses itself through one or more of a motley collection of shibboleth issues such as anti-vaccination theories, anti-climate-change theories, anti-gay rights, young earthism, flat earthism, a huge variety of conspiracy theories usually involving "deep government", fear of government, opposition to the covid-19 lock down, extreme market libertarianism, promotion of gun rights and above all a general identification with the tribe of the right wing of politics. I would not want to call all these people "conservative" because some of them advocate quite extreme, un-conservative, anti-science ideas (e.g. flat earth and other conspiracy theories)*

Although there are some overtly anti-rational Christians who openly embrace fideism, many of the aforementioned right-wingers like to lay claim to scientific legitimacy to give some kudos to their case. But because scientific epistemology is so often unhelpful to their theories, the only way forward for them is to portray a distorted view of science before they can enlist it in support of their views. A common corruption of science on which I have commented many times is the false view that there is a distinction between observational science and so-called historical science, the latter supposed to have no observational support. This concept falls over because no scientifically proposed object is ever really directly observable: what makes the crucial difference is not some bogus distinction between empirical science and non-empirical science but the fact that objects of scientific study lie at varying epistemic distances; that is, those objects vary in how amenable their structures are to being populated with observational protocols.

But rather than accepting that scientific objects sit on a sliding scale of epistemic amenability, many right-wingers like to promote the notion that there is a sharp distinction between true science, which is supposed to be thoroughly empirical, and science they don't like, which they claim isn't (very?) empirical. This distorted concept of science is then mobilised in an attempt to delegitimise science that is inconvenient to the right-wing world view. As I have recorded before on this blog, this polemical technique is very often employed by fundamentalist theme park manager Ken Ham. In fact Ham's tame astronomer Danny Faulkner has spent so many years as an apologist for Ham's theme park that it seems to have addled his thinking about scientific epistemology; see for example this post of mine where I charge Faulkner with having a debased and caricatured view of scientific epistemology. Faulkner thinks that science is about what can be detected with the five senses. Well yes, science is about the five senses, but very little in fact can be detected directly with those senses. The senses simply provide a limited sampling window on the complex but otherwise rational objects of our cosmos, objects which are for the most part well beyond our senses. The only reason why our sensorial "key-hole" view works is (in my opinion) because God has created a thoroughly rational, ordered and therefore readable world. Reading this world is like reading the sentences of a rational person**. Formal science works, and works well. Praise be to God Almighty!

Another example of a right-winger who somehow thinks that true science should be "empirical" can be witnessed in this blog post of mine where Brian Cox clashes with Australian senator and conspiracy theorist Malcolm Roberts. Roberts is unwilling to accept computer climate modelling, and Cox has to labour the scientific point that modelling is the only way to anticipate the future of the Earth's climate. Roberts' claim that the models don't work empirically (which is debatable) is not backed up by any alternative, better models tested against his pretensions to empiricism. It seems that Roberts simply doesn't accept esoteric modelling as part of the valid scientific method. I don't know what he thinks he's going to do with all this empirical data he lays claim to if it isn't used to help build and test a model. In any case, let's beware of the "alternative facts" of those who have swallowed conspiracy theorism as a world view.

Finally, another example of an anti-science right-winger has come to light in a post by PZ Myers, where Myers quotes a Tweet from Republican John Cornyn, who is even clearer in his denial of modelling as valid science:



The Tweet reads:

After #COVID19 crisis passes, could we have a good faith discussion about the uses and abuses of "modeling" to predict the future?  Everything from public health, to economic to climate predictions.  It isn't the scientific method, folks. https://en.wikipedia.org/wiki/Scientific_method

Cornyn obviously hasn't read his own Wiki reference and consequently gets the mauling he deserves from Myers and his following. Their criticisms are along the lines you'd expect from professional science people: you just can't move in science without creating a model of some sort and testing it formally against experimental results. No model? Then nothing to test, and therefore no science. Every department of science, and in fact even much of our day-to-day living, involves the tense and sometimes contentious dialogue between our concept of how the world works (i.e. our mental on-board models) and our experience. We all use an informal version of science: that is, we all have some kind of anticipation about how the world works (i.e. a "model", which may be constructed from a sampling of previous experience) and then bring that anticipation into dialogue with experience. This, I propose, is even true of religion, although let's just say that theology sometimes tends to be more creative, metaphorical, seat-of-the-pants and free-format than the science of the relatively simple, very regular objects of spring-extending and test-tube-precipitating science; no surprise, then, that the gaps and ambiguities in the theological account are sometimes filled in with the authoritarian fulminations of the raving fundamentalist.
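The model-then-test dialogue described above can be made concrete with a minimal sketch. The code below is purely illustrative (the data are invented toy numbers, not real climate or health figures): it proposes a simple linear model fitted to "past experience", then brings the model's prediction into contact with a fresh observation it was not fitted to. Without the model there would be nothing for the new observation to confirm or contradict.

```python
# A minimal sketch (toy data, standard library only) of the
# model-vs-experience dialogue: propose a model, fit it to past
# observations, then test its anticipation against new experience.

# Toy "past experience": yearly observations of some quantity.
years = [0, 1, 2, 3, 4, 5]
observed = [10.1, 11.9, 14.2, 15.8, 18.1, 20.0]

# Step 1: propose a model -- here a straight-line trend fitted by
# ordinary least squares, with slope and intercept computed by hand.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(observed) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, observed))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

def model(x):
    """The model's anticipation of what we will observe at time x."""
    return intercept + slope * x

# Step 2: confront the model with fresh experience it was not fitted to.
new_year, new_observation = 6, 22.3
prediction = model(new_year)
error = abs(prediction - new_observation)

# Step 3: the dialogue -- does experience fall within the model's
# tolerance, or must the model be revised?
tolerance = 1.0
print(f"prediction = {prediction:.2f}, observed = {new_observation}")
print("model survives the test" if error < tolerance else "model needs revision")
```

The point of the sketch is the structure, not the arithmetic: anticipation (the fitted line) meets experience (the held-out observation), and the comparison is what makes the exercise science rather than mere data collection.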

My own guess as to what really drives the right-wing anti-science agenda is a paranoid counter-cultural malaise which smarts under the realisation that it has little influence and credibility among the academic establishment elite. What's worrying, however, is that in America some of these right-wingers are armed to the teeth and may start shooting if they don't get their way.

Bang! Bang! Bang! Bang! You'd better dance to the tune of the AR-15!

Fortunately I think we are dealing with a fanatical minority here - at least I hope so.


Relevant links.

See also
https://quantumnonlinearity.blogspot.com/2011/06/cloistered-academics-vs-punks.html

See also the link below to the de facto ID website Uncommon Descent where we find a video that is ignorant of the status of the second law of thermodynamics:
https://uncommondescent.com/intelligent-design/when-scientists-ignore-science-by-mark-champney


Footnotes

* Some "New Agers" seem to be going down a similar road to Christian fundamentalists, especially regarding conspiracy theorism and anti-vaxxing. They have a similar attitude to academia as do the fundamentalists.

** This assumption of a rational, regular world appears to break down in paranormal contexts. In paranormal circumstances the world, locally at least, slips into an almost dream-like state (cf. "The Oz effect"). These experiences form muddled, erratic patterns that are the antithesis of a testable, regular reality. The paranormal is a breakdown of rationality, a kind of storm of delirium in the usually regular fabric of reality. Hence the great difficulty of attempts to get an epistemic handle on the paranormal. Paranormal experiences do, however, seem to have some kind of loose associative/connotative/Freudian meaning, not unlike dreams.