Thursday, April 12, 2018

The Premium on Prediction

I have updated my short essay "The Premium on Prediction". It is an application of Bayes' theorem to the question of why a theory that successfully predicts outcomes enhances its chance of being right. This contrasts with theories that are retrospectively fitted to the data in hand with imaginative "dot-joining" narratives, an activity which, if achieved succinctly, only amounts to a kind of data compression. The new edition of the essay can be found here. I have a feeling that this essay may be useful in the not-too-distant future as I look into related topics. Below I reproduce the essay's introduction.

***

The following is a back-of-the-envelope analysis that attempts to shed some light on why theories which make correct predictions enhance their chances of being right.
Before I proceed, however, I must add this caveat. It is not always possible to use our theoretical constructions in a way that makes predictions; historical theories in particular are not easy to test at will, and sometimes we have little choice but to come to terms with the post facto fitting of a theory to the data samples we have in hand. In fact, with grand theories that attempt to embrace the whole of life in a world-view synthesis, abduction and retrospective "best fit" analysis may be the only epistemic option available. If we are dealing with objects whose complexity and level of accessibility make prediction impossible, then this has much less to do with "bad science" than it has with an ontology which is not readily amenable to the scientific epistemic. However, in this post I'm going to look at the case where predictive testing is assumed to be possible and show why there is a scientific premium on it. To this end I'm going to use a simple illustrative model: credit card numbers.
Imagine that valid credit card numbers are created with an algorithm that generates only a very small fraction of the numbers that can be expressed with, say, twenty digits. Let us imagine that someone claims to know this algorithm. This person's claim could be put to the test by asking him/her to predict a valid credit card number, or better, a series of numbers. If this person repeatedly gets the prediction right, then we will intuitively feel that (s)he is likely to be in the know. But why do we feel that? Is there a sound basis for this feeling?
I'm going to use Bayes' theorem to see if it throws any light on the result we are expecting – that is, that there is a probabilistic mathematical basis for the intuition that a set of correct predictions increases the likelihood that we are dealing with an agent who knows the valid set of numbers, or rather, the algorithm that generates them.
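To make the intuition concrete, here is a minimal Python sketch of the kind of Bayesian update I have in mind. It is not taken from the essay, and its numbers are purely illustrative assumptions: that someone who genuinely knows the algorithm can always produce a valid number, that a random guesser hits a valid number with only a small probability, and that we start from a sceptical prior that the claimant is in the know.

# Illustrative sketch (not from the essay): Bayes' theorem applied to the
# credit card example. Assumed likelihoods:
#   P(n correct | knows the algorithm)   = 1
#   P(n correct | guessing at random)    = p_guess_valid ** n
def posterior_knows(prior_knows, p_guess_valid, n_correct):
    """Posterior probability that the claimant knows the algorithm
    after n_correct successive valid predictions."""
    p_data_given_knows = 1.0
    p_data_given_guess = p_guess_valid ** n_correct
    numerator = p_data_given_knows * prior_knows
    evidence = numerator + p_data_given_guess * (1.0 - prior_knows)
    return numerator / evidence

# Example: a sceptical prior of 0.01, and valid numbers so sparse that a
# random guess hits one only 1 time in 10,000.
for n in range(4):
    print(n, posterior_knows(prior_knows=0.01, p_guess_valid=1e-4, n_correct=n))

Under these assumed figures, even a handful of correct predictions swamps the sceptical prior and drives the posterior close to 1, which is precisely the premium on prediction that the essay is concerned with.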
In an earlier paper entitled "Bayes Theorem and God" I derived Bayes' theorem from a frequentist concept of probability and then went on to consider an example taken from the book "Reason and Faith" by Forster and Marston, where they use Bayes' theorem to derive the probability of God. As I remarked in "Bayes Theorem and God", there are certainly issues with the interpretation of the terms used by Forster and Marston, issues which compromised the meaningfulness of their result. However, although the problem addressed in the present essay is isomorphic to F&M's "probability of God" calculation, in this more mundane application of Bayes' theorem the terms are less cloudy in meaning. Both the Venn diagram and the mathematics used in my previous paper on Bayes and God can be taken off the peg, with Forster and Marston's terms reinterpreted.
