This morning everyone’s very down on the Office for Budget Responsibility and their economic forecasts. George Osborne relied on them heavily in his Autumn Statement yesterday, using their predictions for growth up to 2017-2018. But at the same time, the OBR itself admitted that its predictions for this year were wrong, again.
Its growth predictions for 2012, made in 2010, were originally 2.8 per cent. In March they were revised to 0.8 per cent. Yesterday, they were downgraded yet again, to -0.1 per cent. The original estimate, then, was wildly off the mark, even before we take into account the (let's face it) near certainty that that -0.1 per cent figure will be revised again. The forecast for 2013 growth, meanwhile, has been revised down from 2 per cent to 1.2 per cent.
But actually the OBR, by the standards of economic forecasters, aren't that bad. They at least make it clear that their predictions are highly uncertain. The real problem is the whole business of economic forecasting as currently practised, and the way politicians use those forecasts: both are badly flawed.
I know I go on about Nate Silver regularly, so apologies if you’re bored, but there’s a line in his book The Signal and the Noise which is relevant. Not only do economists regularly fail to predict the future, most of the time they can’t even “predict” the present: “a majority of economists did not think we were in [a recession] when the three most recent recessions, in 1990, 2001, and 2007, were later determined to have begun.” And yet we still hang on the words of forecasters when they tell us what growth to expect in five years’ time.
It’s an odd thing: we think of weather forecasts as paradigmatically unreliable, but actually they’re pretty good. Meteorologists’ ability to accurately predict, say, the course of a hurricane has improved hugely in the last 30 years, meaning they can usually predict where it will land to within 100 miles three days in advance – which might not sound great, but it allows for realistic evacuation planning. More prosaically, when they say there’s a 60 per cent chance of rain tomorrow, they’re right almost exactly 60 per cent of the time. They have achieved that partly through increasingly complex computer models, but partly by, simply, going back and checking how their old predictions went: they look at all their previous “60 per cent chance of rain” claims, and if it rained on 75 per cent of those days, then they know they’ve got it wrong (and they know their limitations: any meteorologist making predictions of weather, as opposed to climate, more than about a week in advance is having you on). That’s called calibration.
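The calibration check described above is simple enough to sketch in a few lines of code. This is a hypothetical illustration, not anything a real weather service runs: group past forecasts by their stated probability, then compare the stated chance with how often the event actually happened.

```python
from collections import defaultdict

def calibration(forecasts):
    """Group probabilistic forecasts by their stated probability and
    return the observed frequency of the event for each group."""
    buckets = defaultdict(list)
    for stated_prob, it_happened in forecasts:
        buckets[stated_prob].append(it_happened)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in buckets.items()}

# Made-up forecast history: (stated chance of rain, did it rain?)
history = [(0.6, True)] * 3 + [(0.6, False)] * 2 \
        + [(0.9, True)] * 9 + [(0.9, False)]
print(calibration(history))  # {0.6: 0.6, 0.9: 0.9} -> well calibrated
```

If the "60 per cent" bucket came back at 0.75 instead, the forecaster would know, as the meteorologists did, that something was wrong with the model.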
As Silver points out in his book, economic predictions are rather different. He looked at "90 per cent confidence" predictions made by the Survey of Professional Forecasters for the 18 years 1993-2010. They should have been correct in about 16 of those years. In fact they were right just 12 times: their 10 per cent long shots came in 33 per cent of the time. Their calibration was off by more than 200 per cent. Other studies have been even more damning; economic forecasters are systematically overconfident.
The trouble for economists is that people, and more especially politicians, hate uncertainty. Hence not only the systematic overstatement of a prediction’s likelihood, but also the tendency to give single-figure predictions. It’s like, if I’m allowed to crib off Silver again, asking someone to predict the sum of two fair dice. The most likely result is of course seven, but predicting seven would be silly, because it’s still very unlikely. You should say “There’s a one-in-six chance of a seven, a five-in-36 chance of a six (or an eight), a one-in-nine chance of a five (or a nine)” and so on, down to a one-in-36 chance of a two (or a 12). But when it comes to economic forecasts, we demand a single, confident figure.
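Silver's dice example can be worked through exactly. The sketch below enumerates all 36 equally likely outcomes of two fair dice and reports the probability of each total as a fraction, reproducing the figures in the paragraph above:

```python
from fractions import Fraction
from itertools import product

# Count how many of the 36 equally likely rolls produce each total
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

probs = {total: Fraction(n, 36) for total, n in counts.items()}
print(probs[7])   # 1/6  -- the most likely single total
print(probs[6])   # 5/36 (same for 8)
print(probs[5])   # 1/9  (same for 9)
print(probs[2])   # 1/36 (same for 12)
```

The point stands: even the single most likely outcome, seven, only comes up one time in six, which is why a bare point forecast conceals far more than it reveals.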
In fairness to the OBR, apart from its hostage-to-fortune headline figures ("Our central forecast is for the economy to grow by 2 per cent in 2014, 2.3 per cent in 2015, 2.7 per cent in 2016 and 2.8 per cent in 2017"), it does show the uncertainty. In each forecast it provides a "fan chart", giving bands of probability for where the economy will end up. So its "central estimate" for 2017 may be 2.8 per cent, but there's about a 10 per cent likelihood of spectacular above-five-per-cent growth (and about the same of miserable stagnation at below -0.1 per cent). The lower-than-expected growth we've had is within the 80 per cent bands of its November 2011 and March 2012 forecasts; towards the low end, to be sure, but within it*. But since the band is so broad, covering everything from triumph to disaster, that's hardly surprising.
In an ideal world, George Osborne would have stood up yesterday and said “To be honest, the economy could be doing pretty much anything in 2017; these OBR figures are at best a weak guide and I would be an idiot to base any major policy decisions on an estimate that is only fractionally more reliable than reading tea leaves.” But politicians don’t like to do that. So we get told once again with great confidence that in 2017 we will have 2.8 per cent growth, even though everyone involved knows we won’t. It’s slightly less ridiculous than telling us whether it’ll snow that Christmas, but only slightly.
* HUGELY NERDY STATTO FOOTNOTE: What I can't do is work out how good their calibration is: the OBR has only made six sets of predictions, so there's not much data to work with, and I'm not much of a statistician anyway. However, a very brief look at the predictions of annual growth (those made before the relevant year ended) shows that nine of the 11 have been overoptimistic and only two have been under. The OBR claims that its central forecasts are the median: ie that there is an equal chance of their being too optimistic or too pessimistic. Crudely speaking, they say that there's a 50 per cent chance of above-predicted growth. That means the odds should be the same as a fair coin toss. The odds of getting just two heads from 11 fair coin tosses are about 2.7 per cent.
Now, we should remember that it would be possible for them to miss low consistently, which would be just as likely and just as surprising, so we should really add together the probabilities of either just two heads or just two tails: about 5.4 per cent. Not knowing anything else, we would say that this sort of consistent pattern of error should happen about one time in 20. It’s perfectly possible that it’s just the workings of chance. (It falls, incidentally, just outside the arbitrary line of “statistical significance”.)
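For anyone who wants to check the footnote's arithmetic, the binomial calculation is a one-liner. This sketch computes the chance of exactly two heads in 11 fair tosses, and then the two-sided figure (exactly two heads or exactly two tails):

```python
from math import comb

def prob_exactly(k, n):
    """P(exactly k heads in n fair coin tosses) = C(n, k) / 2^n."""
    return comb(n, k) / 2**n

p_two_under = prob_exactly(2, 11)   # exactly 2 of 11 forecasts miss low
p_either_tail = 2 * p_two_under     # ... or exactly 2 miss high
print(f"{p_two_under:.1%}")    # 2.7%
print(f"{p_either_tail:.1%}")  # 5.4%
```

That 5.4 per cent is indeed just the wrong side of the conventional 5 per cent significance threshold, as the footnote says.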
But if that pattern continues into the future, it would be fair to say that the OBR is systematically bullish about the economy, and that their calibration (the idea that 50 per cent will miss high and 50 per cent low) is off. Since the tendency is for economic forecasters to calibrate their predictions badly, it seems reasonable to treat the OBR’s assessment of their error rates with significant caution.
[Table: OBR growth prediction (%) vs actual growth (World Bank, %), with direction of error]
[Table: OBR growth prediction (%) vs actual growth (World Bank, %), with direction of error; second set of forecasts]
[Table: OBR growth prediction (%) vs the OBR's own most recent estimate (%), used in place of "actual" (and, obviously, not counting that most recent prediction as a prediction, since it would be right by definition), with direction of error]
Thanks to James Plunkett on Twitter for helping me get these figures.