The pollsters have had a shocker. A calamitous, humiliating, sector-threatening humdinger of an epic fail.
An uncanny consensus that Labour and the Tories would finish within one point of each other – a closely hung parliament, with Ed Miliband in Number 10 – proved utterly wrong (only one pollster put the gap wider, and that one had Labour 2 points ahead).
Ahead of this election, some commentators pointed to the errors pollsters made in predicting the ’92 election, suggesting that the polls could be wrong again. I didn’t take it too seriously, for a few reasons: the ’92 error was caused in part by out-of-date census data, which wasn’t a problem this time; the pollsters had since adjusted for the ‘shy Tory’ effect behind the ’92 mistakes; and there are more pollsters around now to check one another’s results.
I was wrong, and so were the pollsters.
It’s important they realise how damaging this might be for the polling industry. As it stands, I don’t see why we should treat future election polls as more than a rough guide.
If that’s the case, why should journalists continue to pay for so many political polls?
Some pollsters seem to recognise this, like Stephan Shakespeare at YouGov:
But others, like Ipsos MORI, don’t appear to do so. In a statement, they’ve focused on what they got right (including their exit poll, which, to be fair, was excellent) as if that will divert us from the fact they called the election completely wrong.
I suggest the following approach from pollsters would be more productive:
- Acknowledge they got things completely wrong and that they’re disappointed in their performance.
- Set it in the context of how much pollsters usually get right – e.g. every major UK election after ’92 (broadly right, anyway).
- Show what they’re doing to fix it. The British Polling Council has announced an inquiry into the results: this is good news as long as it’s done well and agencies support it.
I’ve seen various possible explanations for the pollshambles, including lower-than-expected Labour turnout (though I don’t see why that couldn’t have been picked up by polls), and a fresh ‘shy Tory’ effect.
The inquiry should also look at the convergence of the final polls. If the polls had finished a week earlier, two of them (Ipsos MORI on 28/4 and Ashcroft on 26/4) would have got the Labour–Tory gap pretty much right. Instead, they converged on the same answer. The fact that this answer proved to be completely wrong makes me even more suspicious about the process behind the convergence.
Intriguingly, Damian Lyons Lowe at Survation has broken cover to say they suppressed a poll on the eve of the election that had nearly got the result right, as they didn’t want to be an outlier. I wonder whether any other agencies did the same – or tweaked results to fit with the pack.
Unless the pollsters show they’re on top of this, they may struggle to persuade people to take them seriously and commission polls from them in future.
Update 1: Andrew Hawkins at ComRes has joined Ipsos MORI in proclaiming how well his agency did. Not a good look, I suggest.
Update 2: Andrew Cooper of Populus has written in the FT about pollsters’ failure and the need to understand and explain what went wrong.
Update 3: This is, roughly speaking, how some of the pollsters are trying to put it:
And this is how everyone else sees it:
Update 4: Opinium and ICM have joined YouGov and Populus in apologising for the wrong prediction, while the view that polls in general can’t be trusted is becoming established:
and perhaps it will strengthen Lord Foulkes’ efforts to regulate the polling industry: