
Modelling or predicting the deity.

Instead of the usual 'some thoughts at Christmas' or year-end review, here's something about statistics, modelling and probability. My local church magazine recently had an article about Dr Stephen Unwin, author of The Probability of God.

This book uses Bayesian statistical methods to suggest a 67% probability that God exists. The Reverend Thomas Bayes was an eighteenth-century English Presbyterian minister who first formulated a set of procedures for calculating the probability of an event when you already have some information about what is happening. (Rather like the difference between betting on horses before the race starts and just after it has started. Obviously, in many real-life cases we are betting - sorry, applying probability techniques - when the race has already started. For instance, Bayesian methods are widely used to estimate volatility - and hence option prices - for shares and derivatives which may have been traded for many years.)
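
Bayes' rule itself is compact. Here is a minimal sketch in Python - my own illustration, not anything from Unwin's book - which updates a prior probability for a hypothesis once you know how likely the evidence is with the hypothesis true and with it false.

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|not H) * P(not H))
    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        """Posterior probability of hypothesis H after seeing evidence E."""
        numerator = p_evidence_given_h * prior
        denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
        return numerator / denominator

    # Toy example: a horse with a 25% prior chance of winning gets off to a good
    # start, something winners do 80% of the time and losers only 30% of the time.
    print(bayes_update(0.25, 0.8, 0.3))   # roughly 0.47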

Whilst Dr Unwin's maths may be OK, the problem is that the calculation, as he himself admits, "has a subjective element since it reflects my assessment of the evidence." To set up a Bayesian equation, you need hard numbers. To produce information about what has happened since the horses left the post, Dr Unwin constructs a scale of 'indicators' of how our world is doing. One is 'recognition of goodness': Dr Unwin assesses that we are ten times more likely to recognise goodness if God exists than if He does not. Another indicator is 'religious experiences': Dr Unwin believes that these are twice as likely to occur if God exists as if He does not. With some actual numbers (ten times, twice, etc) to go on, you can easily build up a Bayesian probability equation. The trouble is that the concepts are vague and difficult to define, and the numbers seem highly arbitrary, so the result is pretty meaningless too.
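
To see how factors like 'ten times' and 'twice' turn into a headline figure, here is a rough sketch of that sort of chained calculation. The indicator values are placeholders I have made up for illustration - they are not Unwin's actual six factors, and I am not claiming this reproduces his 67%.

    # Start from a 50/50 prior and fold in each 'indicator', expressed as a
    # likelihood ratio: how much more likely the observation is if God exists.
    def chain_update(prior, likelihood_ratios):
        posterior = prior
        for ratio in likelihood_ratios:
            odds = posterior / (1.0 - posterior)    # probability -> odds
            odds *= ratio                           # Bayes' rule in odds form
            posterior = odds / (1.0 + odds)         # odds -> probability
        return posterior

    # Made-up indicators: 10x and 2x in favour; 0.5x, 0.5x and 0.4x against.
    print(chain_update(0.5, [10, 2, 0.5, 0.5, 0.4]))   # about 0.67

Change the made-up ratios and the answer swings wildly, which is rather the point.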

I was reminded of this whilst reading about the cosmological constant, described by Roberto Trotta of Oxford University as "the dark energy that seems to be powering the observed accelerated expansion of the Universe". The problem for physicists is that the constant is very small compared to theoretical predictions. In other words, as Trotta puts it, the universe is "suspiciously supportive for life". He links this to the 'anthropic coincidences' argument, advanced a few years ago by Professor Richard Swinburne, also of Oxford. This says that the universe has evolved in a way so well suited to the growth of human life that this is either a very remarkable coincidence or proof of the existence of God, or some other sort of organising principle. (This view is not universally accepted.) Trotta's paper discusses the argument that the number of observers of the universe (or of possible observations) may allow us to deduce rather more about it. It's an abstruse argument, but he concludes that we don't know enough to settle it anyway.

Reminds me of that other Oxford academic, Nick Bostrom, whose famous paper, inspired by the film 'The Matrix', addressed the question of whether we are living inside a computer simulation and came to the conclusion that this is highly unlikely, unless it is already happening without us realising it. I have to admit that as a Cambridge graduate I am not properly qualified to evaluate Oxford logic.

The trouble really is that such fundamental 'proof' doesn't actually exist. Newton's laws, for instance, whilst they allow you to predict many things with considerable accuracy, are not actually true, just accurate within certain limits. Einstein demonstrated this when he predicted (and observers later confirmed) that light could be bent by gravity. You can deduce that, if certain things are true, then certain others ought to follow, and you can look to see if they actually do. But even if they do, it doesn't actually prove anything. Eugene Wigner's 1960 paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", argues that "the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it" - in other words, that maths may enable us to predict what will actually happen when we do an experiment, but there is no reason why this always should be so.

One of the advantages of simulation is that it offers a new way to approach 'proof'. Modern techniques allow us to simulate (using mathematical models, I admit!) a large number of possibilities, and then compare the results. In effect, we can run simulated 'experiments' in social sciences or other areas, where physical experiment is simply not possible. The experimental results sometimes provide a critique of our assumptions, and never provide the 'truth', but it's a start.
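
To illustrate what I mean, here is a toy sketch - entirely my own example, nothing to do with the papers above - which runs many simulated trials of a very simple model under two rival assumptions and compares the average outcomes.

    import random

    def simulate_once(cooperation_rate, rounds=50):
        """One run of a toy model: wealth grows when agents happen to cooperate."""
        wealth = 100.0
        for _ in range(rounds):
            if random.random() < cooperation_rate:
                wealth *= 1.02    # a cooperative round adds 2%
            else:
                wealth *= 0.99    # an uncooperative round loses 1%
        return wealth

    def run_experiment(cooperation_rate, trials=10000):
        """Average outcome over many simulated runs under the same assumption."""
        return sum(simulate_once(cooperation_rate) for _ in range(trials)) / trials

    print(run_experiment(0.7))   # a high-cooperation world
    print(run_experiment(0.3))   # a low-cooperation world

Neither run 'proves' anything, of course, but comparing the two sets of outcomes at least tells you what your assumptions imply.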

As regular readers of this blog will know, my own pet theory is that massive databases and vast computing power will allow us to record such enormous amounts of data that we will be able to spot longer-term trends which have hitherto been invisible, rather as the telescope enabled Galileo to see the moons of Jupiter. A lot of ideas changed after Galileo's observations. At the moment, sadly, this activity is largely left to police officers, who spend their time devising profiles of offenders and sifting through mobile phone company records to find statistically significant links between terrorist suspects.
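
The sifting itself need not be sophisticated. A crude sketch - a made-up example, not any real police methodology, and a proper analysis would test significance rather than just count - might simply flag pairs of numbers that keep turning up in the same records:

    from collections import Counter
    from itertools import combinations

    def frequent_pairs(call_records, min_count=3):
        """Flag pairs of numbers appearing together in at least min_count records.

        call_records is a list of sets, each holding the numbers involved in
        one record (one call, one cell site, one day - whatever the unit is).
        """
        pair_counts = Counter()
        for record in call_records:
            for pair in combinations(sorted(record), 2):
                pair_counts[pair] += 1
        return [(pair, n) for pair, n in pair_counts.items() if n >= min_count]

    # Made-up records: numbers A and B keep appearing together.
    records = [{"A", "B", "C"}, {"A", "B"}, {"B", "D"}, {"A", "B", "E"}]
    print(frequent_pairs(records))   # [(('A', 'B'), 3)]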

I hasten to add that I'm not encouraging Dr Unwin or others to try to simulate a world with God and a world without, or to collect massive statistics about belief and its results.

That said, I'd rather live next door to Ned Flanders than Homer Simpson. (Matt 7:16-21)

