Fifty-eight per cent of Britons between 16 and 75 believe that if you flip a coin twice, the probability of getting two heads is 50 per cent. Fifty-four per cent of Britons are “fairly confident” in their ability to use data and numbers.
If the Ipsos-Mori poll from which these figures are taken was accurate, then, since 58 plus 54 comes to 112, at least 12 per cent of Britons think they’re pretty good with numbers but can’t work out that 0.5 times 0.5 is 0.25, not 0.5.
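The arithmetic behind that “at least 12 per cent” is simple inclusion-exclusion, and it can be sketched in a few lines (the percentages are the poll’s, the rest is just counting):

```python
# Inclusion-exclusion: if 58% gave the wrong coin-flip answer and 54%
# call themselves fairly confident with numbers, the two groups must
# overlap by at least 58 + 54 - 100 = 12 percentage points.
wrong_answer = 0.58
confident = 0.54
min_overlap = max(0.0, wrong_answer + confident - 1.0)
print(f"Minimum overlap: {min_overlap:.0%}")  # 12%

# And the coin-flip question itself: two independent fair flips.
p_two_heads = 0.5 * 0.5
print(f"P(two heads) = {p_two_heads}")  # 0.25
```

The `max(0.0, ...)` guard just reflects that two groups summing to under 100 per cent need not overlap at all.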
I should admit that, in the scheme of things, I’m not especially brilliant with numbers, data and statistics. But they are important. They’re the only tool we have for assessing the world dispassionately, for stripping it as far as we can of the colour of our own experience. Which is why the numbers given above are not, actually, the most depressing in the poll.
The most depressing numbers are the following. One thousand and thirty-four British adults between 16 and 75 were asked to choose between the following statements:
Statistics are more important than my own experiences or those of my family and friends in helping me keep track of how the government is doing
My own experiences or those of my family and friends are more important than statistics in helping me keep track of how the government is doing
Forty-six per cent chose the latter. Just nine per cent chose the former.
But how can anyone, using their own experience and those of the, say, 150 probably fairly similar people they regularly come into contact with, possibly gain any sort of insight into the effects that government reforms of the NHS, or benefit cuts, or whatever, are having across a nation of tens of millions of people? (Specifically, 56,075,912 in England and Wales, according to the 2011 Census. Statistic!) They are taking a pipetteful of water from the shore and declaring there are no fish in the sea.
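The pipette problem can be sketched with a toy simulation. All the numbers here are hypothetical, invented purely for illustration: suppose some experience affects 30 per cent of the population nationally, but your 150 acquaintances cluster in one place, so their local rate could sit anywhere well above or below that.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical: 30% of the whole population has had the experience.
population_rate = 0.30

# A social circle is not a random sample: people cluster, so assume
# your 150 acquaintances share some local rate between 5% and 60%.
local_rate = random.uniform(0.05, 0.60)
circle = [random.random() < local_rate for _ in range(150)]
circle_estimate = sum(circle) / len(circle)

# A representative poll of 1,000 people drawn from the whole population.
poll = [random.random() < population_rate for _ in range(1000)]
poll_estimate = sum(poll) / len(poll)

print(f"True rate:            {population_rate:.0%}")
print(f"Your circle suggests: {circle_estimate:.0%}")
print(f"The poll suggests:    {poll_estimate:.0%}")
```

Run it a few times with different seeds: the poll estimate stays close to the true rate, while the social-circle estimate tracks whatever the local rate happens to be, however far that is from the national picture.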
The trouble, of course, is that people don’t trust statistics because other people use them to hide the truth rather than illuminate it. Once you’ve been misled, you’re less likely to trust anyone again.
This government has been particularly bad at that recently: the Education Secretary, Michael Gove, was caught out after claiming that “Survey after survey has revealed disturbing historical ignorance, with one teenager in five believing Winston Churchill was a fictional character while 58 per cent think Sherlock Holmes was real.” As Matthew Holehouse discusses elsewhere, his “survey after survey” was eventually revealed to be “research conducted by Premier Inn, the budget hotel chain, UKTV Gold and ‘an article by London Mums Magazine’.”
Before that, Iain Duncan Smith got himself in trouble – though not enough trouble, sadly – by using statistics to misrepresent the effect his proposed benefit reforms are having. “Already we have seen 8,000 people who would have been affected by the cap move into jobs,” he said. But the UK Statistics Authority watchdog pointed out, in no uncertain terms, that the numbers entirely failed to support his claim. The Financial Times caught one of his staff red-handed. IDS had done something similar a few weeks before with his Work Programme, which provides training for unemployed people.
Nick Cohen has written an excellent and furious piece about IDS and his statistical misbehaviour, and argues that there is a wider problem in the government. He may well be right. But it’s not new – the last government did similar things (in fact, as my colleague Rob Colvile has just reminded me, the UKSA was set up to take the Office for National Statistics out of the last government’s hands), the one before as well, probably all of them. “Lies, damned lies, and statistics” was attributed to Disraeli. People have been lied to for too long, hence the feeble nine per cent.
This won’t change: politicians, and everyone else with an interest in doing so, will continue to mislead with statistics (“A 35 per cent reduction in the visible signs of ageing!”). But the dismissal of all statistics as “damned lies” ruins us all: without statistics, we wouldn’t have known that Mid Staffordshire’s death rate was above normal; we couldn’t tell which medical interventions work better; we couldn’t establish whether policing techniques work.

The trick is to teach people the difference between “good” and “bad” statistics: to see why a large poll of a representative sample of the population is more reliable than a phone-in survey on UKTV Gold, and why a longitudinal study or a randomised controlled trial of a drug is better than “I felt better afterwards”. People need to understand why statistics are powerful and how they can be misused. But people who think that their own, parochial experience of the world tells them more than any statistics can, and who base their voting and policy decisions accordingly, might as well be flipping a coin. And as we’ve seen, most of us can’t even get that right.