Wednesday, September 22, 2010

Free Will and Determinism


Scientific investigations into choice

Somebody recently asked me the following question:

....what about free will, how do you think that plays in, given that neurons and synapses are Newtonian in that they are composed of a set of functions/algorithms that can be used by neuroscientists to describe their patterns?

Here is my brief answer:

We all have the first person experience of making choices. But when these choices are examined from a third person perspective they can only be experienced as “external observer” perceptions, perceptions upon which the third person bases his theoretical constructs such as neurons and synapses. The first person “I story” is one of a complex stream of conscious qualia, but the third person “You story”, which is the story of “me” when seen from the perspective of “you”, is a story of the dynamics of aggregates of material elementa. (Although there is nothing to stop the third person “empathising in” the first person perspective from the “You story”.)

I’m not at all averse to the idea that the third person perspective of mind, a perspective that only directly perceives mind as the dynamics of material elementa, may ultimately be accessible to a full description by some version of quantum theory. But even if this is true we must realise that quantum theory is merely a descriptive map; that is, a map composed of formal tokens, a map which is not the “thing-in-itself” but merely a representation of it. For example, we may succeed in constructing a perfect computerised simulation of a car using the formal tokens of 1s and 0s, but the simulation is never the car itself no matter how good it is. Likewise it is conceivable that some version of quantum theory applied to aggregates of particles perfectly describes the mind of man, but it can never be the mind of man itself. Though the formal tokens of some conjectured complete theory of mind may have a point by point conformity with human consciousness, those tokens are, when all is said and done, only a representation of reality, not the reality itself – a reality that includes a rich first person story of consciousness composed of such things as a sense of purpose, meaning and choice.

The formal theoretical model, for obvious reasons, is not the thing it depicts, but the formal model does bring out one important lesson. Such complex things as “choice” and “purpose” are not going to be found down at the level of small numbers of material elementa; rather they will be found and defined at the aggregate level, that is, in the organisation of large quantities of material elementa in all their complex dynamics. Choice and purpose are features of complex configurations of elementa and therefore an intricate object like “choice” is not defined for individual elementa such as neurons.

Important Caveats
When thinking about this subject it is best not to speak in too confident tones about the ultimate accessibility of the human mind to theoretical description. Several caveats must be borne in mind:
1. Quantum Mechanics, as currently understood, only provides probabilistic descriptions. If this situation persists then, in the sense I suggested in this post, the mind, like any other dynamic aggregate of elementa, is, humanly speaking, an indeterministic system.
2. Even if the mind is accessible to a complete quantum description, it may nevertheless be such a complex organised aggregate that it is beyond the human mind to describe itself.
3. We need to keep an open mind about the apparent completeness of our theories: There is no guarantee that quantum theory provides the complete theoretical tool box for the third person perspective.

Tuesday, September 14, 2010

Fullerenes – an Example of Reducible Complexity.


A large fullerene: notice the smattering of pentagonal tiles giving rise to a spherical shape.

Having chanced upon a picture of a fullerene sphere (see above) on the internet, the question of how such structures can come about, given the providence of physics alone, suddenly dawned on me. How could a fullerene form incrementally, carbon atom by carbon atom? To close up into a sphere, the carbon hexagons and pentagons have to tile in just the right way. Thus if we envisage an atom-by-atom formation of the structure it seems at first sight that the atoms need to somehow know in advance just what fullerene they are constructing so that they can arrange themselves accordingly.

So how do fullerenes form without the hands of some intelligent agent directly arranging the atoms appropriately? The only way I could think of was some kind of quantum tunneling straight through to the potential well of the structure, thus allowing it to spring into sight as a fait accompli in one quantum jump. However, this is unlikely, because as structures get larger the volume of the multidimensional space they occupy increases rapidly. The sought-for structure then occupies such a small volume element in that space that a quantum jump to it is highly improbable.
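To put a toy number on that improbability, here is a counting sketch (this is not real quantum mechanics, and the placements-per-atom figure is invented for illustration):

# A counting sketch: if each of n atoms can sit in any of k
# distinguishable places, there are k**n configurations, and a single
# target structure is just one of them; its share of the space
# collapses exponentially as the structure grows.

k = 10  # assumed number of distinguishable placements per atom (illustrative)
for n in (10, 60, 540):  # 60 atoms ~ buckminsterfullerene; 540 ~ a giant fullerene
    print(f"{n:3d} atoms: the target is 1 configuration in {k}**{n}")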

However, it seems that Nottingham University has come to the rescue. Using an electron microscopy technique that allows them to actually watch the formation of a fullerene, they have come up with a fullerene formation scenario. A short article on the Nottingham University web site contains this crucial passage:

In their collaborative study the direct transformation of a graphene sheet into a perfectly formed fullerene ball has been observed for the first time - a process that for more than two decades was thought to be impossible. They observed that the fullerene ball is formed by removing carbon atoms one-by-one from the edge of the graphene sheet. Pentagons of carbon atoms can then form at the edge of the sheet allowing the graphene sheet to curve into a bowl shape. Eventually, as the graphene sheet continues to lose carbon atoms the complete fullerene ball is formed.

(See http://www.nottingham.ac.uk/Chemistry/News/FullereneformationinNature.aspx ) 

It looks, then, as though the “long-standing puzzle” of fullerene “irreducible complexity” has been, or is being, solved; there is, after all, an incremental path through to these structures. The providence of Physics, I call that.


A Meccano Fullerene - Pity I don't have enough Meccano strips in my set to make it.

Determinism and Indeterminism

Wolfram's Ideas Have Some Implications For Indeterminism

Somebody recently asked me to comment on the nature of determinism and indeterminism. Here is my answer:

Here is some waffle re. your questions:

Let’s assume every physical thing constitutes a pattern or structure of some kind and that the job of physics is to come up with a description of these objects using generating functions or algorithms. The functions/algorithms used to describe these physical patterns or structures constitute a theory. On this view, then, something like Newtonian theory is composed of a set of functions/algorithms that can be used to describe the patterns of the world around us.

If we take on board this rather sketchy model of physics (which is in fact the way I personally understand physics at the moment) some things obviously follow. The class of compact or small algorithms/functions is relatively limited, for the fairly obvious reason that small objects are by definition constructed from a relatively small number of parts and these parts can only be juxtaposed in a limited number of ways. In contrast, the physical objects that theories attempt to describe may be huge. Thus, for obvious combinatorial reasons, the number of possible physical objects is far greater than the number of physical theories that can be constructed.
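The mismatch can be caricatured in a few lines of Python (crudely treating a “theory” as any binary string of at most 100 bits and a “physical object” as any configuration of a million bits):

def strings_up_to(bits):
    # number of binary strings of length 1..bits: 2 + 4 + ... + 2**bits
    return 2 ** (bits + 1) - 2

compact_theories = strings_up_to(100)
possible_objects = 2 ** 1_000_000

# even raising the theory count to the thousandth power leaves it dwarfed
print(possible_objects > compact_theories ** 1000)   # True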

However, the ideas of Stephen Wolfram have some interesting implications here. He believes that the majority of relatively small algorithms can, if given enough time, compute everything that can be computed. If I understand Wolfram aright, this means that any given (finite) physical pattern/structure can, given enough time, be described by a physical theory, provided “theory” in this context is understood as a kind of algorithm or function. Admittedly Wolfram’s ideas are conjectural, but if they hold true it means that there are no finite objects out there that cannot be described using a theory. If a theory is given enough computation time it will eventually describe any stated physical system. The problem of the mismatch between the number of succinct theoretical systems and the huge number of physical objects that can be described is surmounted because we allow physical theories indefinite amounts of execution time to come up with the goods.
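For a concrete taste of that claim (a standard illustration, not Wolfram’s own code), elementary cellular automaton rule 30 is about as small as an algorithm gets, yet given nothing but execution time it churns out a notoriously intricate pattern:

def step(cells, rule=30):
    n = len(cells)
    # each cell's next value is looked up from its three-cell neighbourhood
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31 + [1] + [0] * 31   # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)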

If Wolfram is right (and I have a gut feeling he is), and if we define an indeterministic system as a physical system that cannot be described with an algorithm or function, then it seems that on this basis there are NO (finite) systems that are indeterminate – all systems can be calculated from a basic algorithm given enough execution time. Therefore, if we want a definition of determinism/indeterminism that does justice to the fact that some systems seem less humanly tractable than others, we need to change the definition of determinism.

In spite of the fact that (according to Wolfram) everything is in principle deterministic, it is clearly not so from a practical human angle. Some objects are, from our perspective, “more deterministic” than others, and therefore my proposal is that determinism and indeterminism are not a dichotomy but the two ends of a spectrum.

Some physical objects have a pattern that is highly ordered and we can quickly rumble the algorithm that can be used to calculate it with a relatively short execution time. We can then use the algorithm to “predict” between the dots of observation, and in this sense the pattern is deterministic. However, as the pattern becomes more and more complex, locating the algorithm that will describe it becomes more and more difficult because of the increasing difficulty of finding an algorithm that will describe the pattern in a convenient execution time: most highly complex physical systems will only be generated after an impractical execution time, and thus humanly speaking the system is indeterministic. Thus physical systems move from a deterministic classification to an increasingly indeterministic classification as they require longer and longer execution times to be generated by an algorithmic “theory”. In this connection I suspect that the randomness of quantum mechanics is, from a human point of view, practically indeterministic.
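Here is a toy version of “rumbling the algorithm”, a construction of my own using the elementary cellular automata mentioned above: given an observed pattern, brute-force the 256 possible rules until some of them regenerate it. For this tiny rule space the search is instant; as rule spaces and execution times grow, the same search becomes humanly intractable, which is the practical sense of “indeterministic” I am proposing.

def run(rule, width=16, steps=8):
    # generate the pattern produced by an elementary cellular automaton rule
    cells = [0] * (width // 2) + [1] + [0] * (width - width // 2 - 1)
    history = [tuple(cells)]
    for _ in range(steps):
        cells = [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
                 for i in range(width)]
        history.append(tuple(cells))
    return tuple(history)

observed = run(90)   # pretend this pattern was found "in the wild"
# prints every rule consistent with the observation (rule 90 among them)
print([r for r in range(256) if run(r) == observed])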

Finally to answer your question:

....is it true that for a model to *completely* predict everything within a system it would have to be at least as large (in terms of information content) as the system it was predicting, or is there a case in which a model could *completely* predict everything within a system but not be at least as large (in terms of information content)?

I’m not going to answer this question in terms of “information content” as we would get embroiled in the technical definition of information; instead I’ll answer it from an intuitive and human standpoint. One of the benefits of mathematical theorising, from a human point of view, is that in most cases the theory is not as large and complex as the system it is “predicting”. Think of the Mandelbrot set and compare it with the algorithm used to generate it: it is very easy to remember the algorithm but not a bit-for-bit map of the set itself. If physics is to the structures it predicts as the Mandelbrot algorithm is to the Mandelbrot set, then in terms of bit-for-bit data storage the models of physics, like the Mandelbrot algorithm, are in human terms much smaller than the objects they compute. However, it seems that there are many objects out there, like history and God, that are not humanly theoretically “compressible” and therefore they will ever remain narrative-intense subjects.
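To make the comparison vivid, here is essentially the whole Mandelbrot “theory” in a dozen lines of Python (the rendering resolution is arbitrary); the bit-for-bit map it generates can be made as enormous as your patience allows:

def in_mandelbrot(c, max_iter=50):
    # the entire "theory": iterate z -> z*z + c and watch for escape
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# a tiny slice of the bit-for-bit map the algorithm generates
for im in range(11, -12, -2):                    # imaginary axis, 1.1 down to -1.1
    print("".join("#" if in_mandelbrot(complex(re / 10, im / 10)) else " "
                  for re in range(-20, 11)))     # real axis, -2.0 to 1.0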

Saturday, September 11, 2010

Dumb Design

.....coming to a  Dumb Design blog near you

This post by GilDodgen on Uncommon Descent betrays a measure of extremism typical of some parts of the anti-evolution movement. One doesn’t have to accept evolution to see that Dodgen has a caricatured and distorted view of evolutionists. Here are the salient issues raised in the post:

1. Dodgen was once a militant atheist. That’s the first warning sign: This man isn’t the sort who is going to believe things by halves and probably thinks everyone else should be as highly convicted and polarised as he is.

2. Dodgen titles his post Why Secular and Theistic Darwinists (sic) fear ID. So where does this implied accusation leave Christians like Sir John Polkinghorne, who is an evolutionist and justifies his claim to being an Intelligent Design Creationist because he sees the providence of God in the conditions required to make evolution a fruitful process? Thus does Dodgen try to drive a wedge between Christian evolutionists and their belief in the providence of God.

3. Dodgen is at his worst when he defines evolution in a way that would no doubt have warmed his heart when he was an extreme atheist: “God-guided evolution” was an oxymoron, since “evolution,” as defined in the academy and by its major promoters, is by definition undirected and without purpose. Like all theories, evolution purports to describe reality using a set of principles that succinctly embody information about the patterns displayed by the structure of reality. As I have often remarked on this blog, evolution must be highly resourced by information to work; this information, if it exists, is found in the layout of configuration space. This information provides a probability envelope that can be said to direct evolution in the sense that the envelope constrains what is possible. Only the descriptive information embodied in this envelope is within the terms of reference of evolutionary theory; terms such as “intentional” and “purpose” are not within evolutionary theory’s remit. Once again notice how Dodgen’s comments have the effect of denigrating and defaming Christian evolutionists by accusing them of aiding and abetting those who propose a purposeless, godless universe.

I don’t think people like Dodgen are theoretically competent enough to deal with the matter of evolution, whatever its status. He still has cognitive legacies from his extremist past to cope with and these impair his judgment (in fact he’s still an extremist, so nothing has changed there). If evolution is false then apparently Dodgen simply doesn’t have the wherewithal to equip him for the job of refuting it, and he should be fired.

Wednesday, September 08, 2010

The Narrow Cosmic Performance Envelope


The Cosmos must have a very particular performance envelope if evolution is going to get anywhere very fast. (i.e. 0 to Life in a mere 15 billion years)


Brian Charlwood has posted a comment on my blog post Not a Lot of People Know That. As it’s difficult to work with those narrow comment columns I thought I would put my reply here. Brian’s comments are in italics.

You say //So evolution is not a fluke process as it has to be resourced by probabilistic biases.// so it is either a deterministic system or it is a random system.

I am not happy with this determinism vs. randomness dichotomy. To appreciate this, consider the tossing of a coin. The average coin gives a random configuration of heads/tails with a fifty/fifty mix. But imagine some kind of “tossing” system where the mix was skewed in favour of heads. In fact imagine that on average tails only turned up once a year. This system is much closer to a “deterministic” system than it is to the maximally random system of a 50/50 mix. To my mind the lesson here is that the apparent dichotomy of randomness vs. determinism does no justice to what is in fact a continuum.
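A quick simulation makes the point (the probabilities are illustrative only): as the bias grows, the toss sequence slides from maximal randomness towards something practically deterministic.

import random

random.seed(0)
for p_tails in (0.5, 0.1, 1 / 365):   # fair coin ... "tails once a year"
    tosses = "".join("T" if random.random() < p_tails else "H" for _ in range(60))
    print(f"P(tails) = {p_tails:.4f}: {tosses}")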

A deterministic system requires two ingredients:
1/ A state space
2/ An updating rule
For example a pendulum has as a state space all possible positions of the pendulum, and as updating rules the laws of Newton (gravity, F=ma) which tell you how to go from one state to another, for instance the pendulum in the lowest position to the pendulum in the highest position on the left.

Fine, I’m not averse to that neat way of modeling general deterministic systems as they develop in time, but for myself I’ve scrapped the notion of time. I think of applied mathematics as a set of algorithms for embodying descriptive information about the “timeless” structure of systems. This is partly a result of an acquaintance with relativity, which makes the notion of a strict temporal sequencing across the vastness of space problematical. Also, don’t forget that these mathematical systems can also be used to make “predictions” about the past (or post-dictions), a fact which also suggests that mathematical models are “information”-bearing descriptive objects rather than being what I can only best refer to here as “deeply causative ontologies”.
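For concreteness, here is a minimal rendering of the state-space/updating-rule picture quoted above (all numbers are illustrative, and the update is a crude Euler step rather than a serious integrator):

import math

g_over_l = 9.8   # gravity over pendulum length, assuming a 1 m pendulum
dt = 0.01        # time step in seconds

def update(state):
    # state space: (angle, angular velocity); updating rule: one Euler step
    theta, omega = state
    return (theta + omega * dt, omega - g_over_l * math.sin(theta) * dt)

state = (0.5, 0.0)   # initial state: half a radian from vertical, at rest
for _ in range(300):
    state = update(state)
print(f"after 3 s: angle = {state[0]:+.3f} rad, velocity = {state[1]:+.3f} rad/s")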

A random system is a bit more intricate. It can be built up with
1/ A state space
2/ An updating rule
Huh? Looks the same. Yeah, but I can now add that the rule is updating probabilities. Contrary to deterministic systems, the updating rule does not tell us what the next state is going to look like given a previous state; it is only telling us how to update the probability of a certain state. Actually, that is only one possible kind of random system; one could also build updating rules which are themselves random. So you have a lot of possibilities: on the level of probabilities, a random system can look like a deterministic system, but it is really only predicting probabilities. It can also be random on the level of probabilities, requiring a kind of meta-probabilistic description.

If I understand you right then the Schrödinger equation is an example of a system that updates probabilities deterministically. The meta-probabilistic description you talk of is, I think, mathematically equivalent to conditional probabilities. This comes up in the random walk, where steps to the left or right by a given distance are assigned probabilities. But conceivably step sizes could also vary in a probabilistic way, thus superimposing probabilities on probabilities, i.e. conditional probabilities. In the random walk scenario the fascinating upshot of this intricacy is that it has no effect on the general probability distribution as it develops in space. (See the “central limit theorem”.)
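That upshot is easy to see in a sketch (the step sizes are invented): even when the step size is itself drawn at random – probabilities piled on probabilities – the spread of final positions still settles into the familiar bell shape.

import random

random.seed(1)

def walk(steps=1000):
    x = 0.0
    for _ in range(steps):
        size = random.choice((0.5, 1.0, 2.0))   # the step size is itself random
        x += size if random.random() < 0.5 else -size
    return x

finals = [walk() for _ in range(2000)]
for lo in range(-100, 100, 25):                 # a crude text histogram
    count = sum(lo <= x < lo + 25 for x in finals)
    print(f"{lo:+4d} .. {lo + 25:+4d} | {'*' * (count // 20)}")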

Anyway, these are technical details, but let's look at what happens when we have a deterministic system and we introduce the slightest bit of randomness. Take again the pendulum. What might happen is that we don't know the initial state with certainty, the result is that you still have a deterministic updating rule, but you can now only predict how the probability of having a certain state will evolve. Now, this is still a deterministic system, the probability only creeps in because we have no knowledge of the initial state.
But suppose the pendulum was driven by a genuine random system. Say that the initial state of the pendulum is chosen by looking at the state of a radio-active atom. If the atom decayed in a certain time-interval, we let the pendulum start on the left, if not on the right. The pendulum as such is still a deterministic system.
But because we have coupled it to a random system, the system as a whole becomes random. This randomness would be irreducible.

This would classify as one of those systems on the deterministic/random spectrum. The mathematics of classical mechanics would mean that not just any old behavior is open to the pendulum system, and therefore it is not maximally random; the system is constrained by classical mechanics to behave within certain limits. The uncertainty in initial conditions, when combined with the mathematical constraint of classical mechanics, would produce a system that behaves randomly only within a limited envelope of randomness; the important point to note is that it is an envelope, that is, an object with limits, albeit fuzzy limits like a cloud. Limits imply order. Thus, we have here a system that is a blend of order and randomness; think back to that coin tossing system where tails turned up randomly but very infrequently.
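The envelope can be seen in toy form by re-using the crude pendulum sketch from earlier, with the radioactive trigger faked by a random choice; the start is random, but classical mechanics still confines every trajectory within a bounded swing:

import math
import random

random.seed(2)
g_over_l, dt = 9.8, 0.01

def max_swing(theta0, steps=1000):
    # integrate the pendulum and record the largest angle reached
    theta, omega, peak = theta0, 0.0, abs(theta0)
    for _ in range(steps):
        theta, omega = theta + omega * dt, omega - g_over_l * math.sin(theta) * dt
        peak = max(peak, abs(theta))
    return peak

for _ in range(5):
    start = random.choice((-0.5, 0.5))   # the "radioactive" left/right trigger
    print(f"start {start:+.1f} rad -> swing stays within {max_swing(start):.3f} rad")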

So, if you want to say that there is a part of evolution that is random, the consequence is that the whole of it is random and therefore it is all one big undesigned fluke.

No, I don't believe we can yet go this far. Your randomly perturbed pendulum provides a useful metaphor: relative to the entire space of possibility the pendulum’s behavior is highly organized, its degrees of freedom very limited. Here, once again, the probabilities are concentrated in a relatively narrow envelope of behavior, just as they must be in any working evolutionary system – unless, of course, one invokes some kind of multiverse, which is one (speculative) way of attempting to maintain the “It’s just one big fluke” theory. Otherwise, just how we ended up with a universe that has a narrow probability envelope (i.e. an ordered universe) is, needless to say, the big contention that gets people hot under the collar.

Monday, September 06, 2010

Epistemic Humility


The Taleb paradox: He may be abrasive and arrogant, but he knows a thing or two about Epistemic Humility. See here

A couple of Web posts of interest:

1. Human Cognitive Limits
I was interested in this Times Online article by the Astronomer Royal Sir Martin Rees, entitled We may never discover how the universe ticks. In the article Rees writes:
But we should be open to the prospect that some aspects of reality — a unified theory of physics, or a full understanding of consciousness — might elude us simply because they’re beyond human brains, just as Einstein’s ideas would baffle a chimpanzee

2. The logical limits of a purely descriptive paradigm
This post on Uncommon Descent refers to Oxford mathematician John Lennox’s rebuttal of some of Stephen Hawking’s recent claims. Lennox is quoted as saying:
But contrary to what Hawking claims, physical laws can never provide a complete explanation of the universe. Laws themselves do not create anything, they are merely a description of what happens under certain conditions.
….which is very much in line with the comments in my last post about the descriptive role of physics; physics can only go as far as a maximally “compressed” descriptive narrative of the Cosmos and therefore provides no ultimate logical necessity. Since the “data compression” operation of physical description aims to reduce content down to the simplest possible, that content will not contain its own explanation; it is simply too simple for that. As far as the search for Aseity is concerned, physics, with its program of progressive reduction to the elemental, is likely to be proceeding in exactly the opposite direction; for there is only one other place to look for self-explanation, and that is in the a-priori complex, probably the infinitely a-priori complex. Therefore, understanding the concept of Aseity may elude us for the same reason Rees has given us: viz, simply because it is beyond human brains, just as Einstein’s ideas would baffle a chimpanzee.

Sunday, September 05, 2010

Not a Lot of People Know That.



The 64 billion dollar question: Is physics mere description, or does it deal in intentional causes?

In this post on Cornelius Hunter’s blog, “Darwin’s God”, Hunter remarks on an insect that appears to have bifocal lenses:

Take a close look at this organism—a very close look. Now answer these questions: Are you an evolutionist? Was this bug created by random mutations? Is it a Lucretian concoction? For evolutionists the answer is yes, all organisms must be such concoctions, and in so saying they are their own accuser—this is not about science.

And then there is the complaint that those mutations really aren’t random. So the mutations knew what to design? Of course not, but, but … But what? Of course the mutations are random with respect to the design. And that is the issue at hand.

Or there is the retort that natural selection remedies all. Those mutations aren’t random at all, they have been selected by a reproductive differential. But of course this after the fact selection does not dictate which mutations should occur. All selection does is kill off the useless mutations. The fact that most mutations don’t work doesn’t help matters as evolutionists imagine, it just reduces the chances of evolution’s miracle stories. The mutations are still random, there are simply fewer (far fewer) of them to work with because most don’t survive.

When evolutionists complain that mutations really aren’t random one gets the feeling that Hunter thinks they are playing against the rules of their own game. But as I have made clear before in this blog, if evolution has occurred in exactly the way conventional evolutionists claim, then for it to produce the results in sufficient time (~ a few billion years) the cosmic system must be highly constrained and therefore far from random; that is, physical laws have to eliminate so many possible states that, among the possibilities remaining, the class of evolutionary outcomes is large enough to return a realistic probability of evolution. The physical laws would have to achieve this result by ensuring that the class of viable organic structures forms a connected set in configuration space (something I have said many times in this blog). This would imply that this class of structures is not evenly distributed in configuration space; for if it were, the structures would be too thinly dispersed for evolution to jump the intervening spaces between different organisms via stepwise diffusion. This connected object in configuration space would have to be implicitly encoded by the laws of physics. To the ID theorist’s objection that the laws of physics could not encode such a complex, information-rich object, my reply is that the alternative object they are proposing is equally complex and information-rich; for, conversely, if as the ID theorists insist the class of viable organisms is not connected (that is, organisms are “irreducibly complex”), then this highly complex, information-rich disconnected structure would also be implicit in physics. Interesting meta-questions here are these: What subset of elegant physical laws implies a reducibly complex set of organic structures? What is the size of this subset relative to the class of all elegant physical laws?
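The connectedness point can be rendered as a toy search (the bit-string “genomes”, the viability sets and the greedy search are all invented for illustration): stepwise evolution across configuration space succeeds only when the viable configurations form a connected path.

def hamming(a, b):
    # number of positions at which two "genomes" differ
    return sum(x != y for x, y in zip(a, b))

def stepwise_search(start, target, viable):
    # greedy one-bit-at-a-time "evolution": every intermediate must be viable
    current, path = start, [start]
    while current != target:
        for i in range(len(current)):
            step = current[:i] + str(1 - int(current[i])) + current[i + 1:]
            if step in viable and hamming(step, target) < hamming(current, target):
                current = step
                path.append(current)
                break
        else:
            return None   # stuck: the viable set is disconnected from here
    return path

connected = {"0000", "0001", "0011", "0111", "1111"}
print(stepwise_search("0000", "1111", connected))          # finds a stepwise path
print(stepwise_search("0000", "1111", {"0000", "1111"}))   # None: "irreducibly complex"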

I have said things like the foregoing many times in this blog. It amounts to saying this: Whatever way we look at the aspect presented to us by the visible cosmos, we cannot get away from the fact that its probabilities are highly skewed in favour of life, whether or not life has evolved. But it seems that not a lot of people know that; folk opinion is that somehow evolution makes creating organic forms a trivially logical outcome; that is, it is thought of as a kind of “creation made easy” story. But it simply isn’t easy: In the greater scheme of all possible physical regimes, those regimes that favour evolution are so rare as to be miraculous. In fact I submit that the computational complexity of locating a system that entails evolution is far greater than that of directly locating viable organisms. Not a lot of people know that.

The fact is that evolutionary logic has the effect of obscuring the highly weighted probabilities that necessarily resource it. Therefore those who dislike a bias in the probabilities because it smacks of “creation” will naturally be attracted to evolutionary theory because it obscures the bias. So perhaps Hunter is right to complain about those evolutionists who want to have their cake and eat it: They want a system which generates organic configurations that is both random and yet not random.

There is an extreme irony in all this because there is a backhanded acknowledgement by anti-ID evolutionists of the ID theorist’s contention that a probabilistic skew betrays the contrivance of intelligent agency, effectively “loading the die”. The anti-ID evolutionists find great difficulty in coming to terms with skewed probabilities and thus default to evolution because at first sight it seems to satisfy the requirement of providing a physical system whose probability weightings have not been “rigged” by an intelligence. But, in fact, evolutionary theory has the effect of hiding probabilistic weightings in a huge hinterland of physical logic. For example, when genetic research workers look at what seems to be an apparently short evolutionary route between two organisms in terms of genetic changes, evolutionary change looks deceptively easy. But what is very easy to miss here is that in actual fact the two organisms are embedded in a huge field of possibilities, a field that must be explored in order to move from one organism to the other. Unless the logic of real physics cuts this space down considerably, the size of the exploration space between two organisms separated by N genetic jumps is, in big ‘O’ notation:

O( EXP(N)),

… an expression that entails a very large number; for N = 100 jumps and even just two alternatives per jump, that is of the order of 2^100 ≈ 10^30 possibilities. Thus, if it is not appreciated that some kind of underlying physical regime is required to reduce the size of this space in order to make it a tractable search problem, evolution can give the false impression of being a logically trivial outcome tantamount to “creation from nothing”. Thus, Hunter is probably right when he says:

Indeed, evolutionists insist that evolution must be true—a fact every bit as much as the fact that the earth is round, that the planets circle the sun, or even gravity. Yes, there must be no question about evolution. Religion drives science, and it matters.

When it finally does sink in that a working version of evolution (at least as far as cosmic durations are concerned) demands a very high level of probabilistic weighting, the first port of call for those who feel uncomfortable with a system “rigged” in favour of life is, of course, the multiverse. The multiverse spreads the butter of probability evenly over a huge field of platonic possibilities and this makes it look as though there is no “somebody” out there who “cares” enough to “fix the figures”. The sheer size of the required multiverse is testament to the unimaginably enormous space of platonic possibility. For the multiverse must be big enough to generate enough random trials to eventually produce living configurations.

But even then the multiverse hits another problem: a system of laws that generates an enormous number of trials with no bias toward particular outcomes seems an incredibly powerful resource given that it is logically unwarranted; for it doesn’t answer the inevitable question about its own origin. Thus even an object as complex as the multiverse does not have the property of aseity.

Physics is a passively descriptive system: it provides the generating algorithms that embody information about the configuration of our cosmos. If we are looking for “causes” that go deeper than mere succinct descriptive mathematical entailments, physics will not deliver: physics provides no answers to the question “why?” if that question is posed with motives that subliminally expect answers in terms of the intentional causes of a creative sentience. Thus, if we are not going to give ourselves the option of assuming that a highly complex sentient object with the property of aseity is an a priori and irreducible feature of existence, we must accept that in the final analysis the flat descriptive world of physics, once it has done its job of compressing a full description of the cosmos into the smallest possible mathematical narrative, leaves us facing an impenetrable wall of descriptive brute fact. Therefore physics has no way of satisfying those who are nagged by a deeply felt curiosity arising out of the intuition that “causes” go much deeper than mere compressed description.

Wednesday, September 01, 2010

The Attack of the Internet Boids


No pranks, thanks: The 4chan user boids may swoop down on anyone, anytime.

I've just had my attention drawn to the 4chan web site phenomenon. Here is an article on BBC News about it. It seems to be a web site that propagates a kind of organized trolling. One particular example of this “trolling in unison” that amuses me just a little bit is this one reported by the BBC:

Using the "Anonymous" persona, its tactics have included urging users to google the phrase "Scientology is a cult", pushing it to the top of Google Hot Trends, as well as staging real-world protests. In response, the Church of Scientology has labeled them "terrorists" guilty of "hate crimes".

In fact if you put “Scientology is....” into Google you get a recommendation of the general form:

Scientology is (expletive deleted)

That should give the Scientologists’ lawyers a way to earn their crust. I wonder if they will dare to nobble Google and attempt to rig its recommendation algorithms?

However, 4chan users dish out mob justice and that can go over the top. The cruel woman who was caught by CCTV dumping a cat into a wheelie bin was outed by the many eyes and ears of 4chan users and she has since more than paid for her crime. Also, amongst other pranks, YouTube has been flooded with pornographic videos by 4chan users acting in unison.

It’s instructive to compare 4chan with the readers (or “raiders” as I call them) of PZ Myers’ blog, Pharyngula. At Myers’ prompting his many readers descend on some of the meaningless internet polls and give the poll creators a result they don’t want (but often the result they deserve). They also gave the hegemonic Real Catholics a nasty shock after PZ published Real Catholic Michael Voris’ video denunciation of democracy.

All in all it’s a lesson in systems theory and self-organization. We are probably dealing with a power law effect here: I suspect that the number of 4chan users dwarfs even PZ Myers’ readership, as one might expect if the distribution of web site user populations follows a power law. The users of 4chan are behaving like boids: the individuals follow a dynamic based on a relatively simple reaction to total group behavior, and yet what emerges is a single collective identity that has a modicum of intelligent behavior. Given that their eyes and ears are everywhere they are a force to be reckoned with. However, it’s a form of mob rule and their justice will be gut justice, not very discerning or even very just for that matter. Given the nature of this kind of mobsterism one can see why “An eye for an eye, a tooth for a tooth…” actually advanced the cause of justice in the wild Middle East, circa 1250 BC.
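For the curious, here is a bare-bones boids sketch (after Craig Reynolds’ three steering rules; every parameter is invented for illustration): each individual reacts only to simple group averages, yet the population collapses into a single coherent flock.

import random

random.seed(3)
boids = [[random.uniform(0, 100), random.uniform(0, 100),   # position x, y
          random.uniform(-1, 1), random.uniform(-1, 1)]     # velocity vx, vy
         for _ in range(20)]

def step(flock):
    cx = sum(b[0] for b in flock) / len(flock)   # group centre (cohesion target)
    cy = sum(b[1] for b in flock) / len(flock)
    avx = sum(b[2] for b in flock) / len(flock)  # average heading (alignment target)
    avy = sum(b[3] for b in flock) / len(flock)
    new = []
    for b in flock:
        vx = b[2] + 0.01 * (cx - b[0]) + 0.1 * (avx - b[2])
        vy = b[3] + 0.01 * (cy - b[1]) + 0.1 * (avy - b[3])
        for o in flock:                          # separation: avoid crowding
            if o is not b and abs(o[0] - b[0]) + abs(o[1] - b[1]) < 2:
                vx += 0.05 * (b[0] - o[0])
                vy += 0.05 * (b[1] - o[1])
        new.append([b[0] + vx, b[1] + vy, vx, vy])
    return new

for _ in range(200):
    boids = step(boids)
spread = max(b[0] for b in boids) - min(b[0] for b in boids)
print(f"horizontal spread after 200 steps: {spread:.1f}")   # the flock has coalesced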