The strange case of the converging election polls

The pollsters have submitted their final judgements of public opinion before the election.

They’ve disagreed for months about how people say they will vote: less than a month ago two polls on the same day put the Tories on 39 and 33 respectively and Ukip on 7 and 15.

But now that the final polls are in, the results are strikingly similar.

A quick analysis shows how the variance has collapsed between previous weeks' polls and this week's. Variance in pollsters' scores for Ukip fell from 7.6 in mid-April to 3.4 now, while the variance for Labour fell from 4.7 to a tiny 0.8 (all but one of the final polls put Labour on 33 or 34) (* methodology below).
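For anyone who wants to check the sums, "variance" here just means the spread of pollsters' headline shares for a given party at a given point in time. A minimal sketch of the calculation, using made-up shares rather than the actual polls discussed above:

```python
from statistics import pvariance

# Hypothetical headline shares (%) for one party across six pollsters,
# first in mid-April and then in the final round of polls.
# These are placeholder figures, not the real polls discussed above.
mid_april = [15, 13, 10, 12, 7, 14]
final_week = [12, 13, 11, 12, 14, 11]

# The variance across pollsters at each point in time is the number
# that collapses as the polls "converge".
print(pvariance(mid_april))   # wider spread earlier in the campaign
print(pvariance(final_week))  # narrower spread in the final polls
```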


They’re so similar, in fact, that it’s tempting to be sceptical. After months of polls that no-one could test, the polls converge on the day when they’ll be assessed against a real ballot of public opinion.

A pollster that gets it completely wrong when no-one else does would look very silly. But one that gets it wrong when everyone else does? There's nothing to single it out. The incentive for following the herd is clear.

I can think of several ways of rigging a poll to get the answers you want, though none seem easy or safe.

You could fiddle with the weights (including those based on respondents' 2010 vote), though that could be detected by poll nerds; you could change the criteria for who you select to question, though that would be a fairly crude tool for a single poll; you could even manually change some of the results after fieldwork to give the answers you want, though that's so obviously fraudulent it would be a disaster for any pollster that got caught (if anyone wants to whistleblow, drop me a line!).
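To make the first of those options concrete: published figures are weighted, so a quiet nudge to the weights nudges the headline without touching the raw fieldwork. Here's a rough sketch with invented respondents and an invented weighting tweak (nothing like any real pollster's scheme) showing the effect:

```python
# Hypothetical respondents: (current vote intention, recalled 2010 vote, weight).
# All figures invented for illustration; real weighting schemes are far more involved.
respondents = [
    ("Con", "Con", 1.0), ("Con", "Con", 0.9), ("Lab", "Lab", 1.1),
    ("Lab", "LD", 1.0), ("Ukip", "Con", 1.2), ("LD", "LD", 0.8),
    ("Con", "Con", 1.0), ("Lab", "Lab", 1.0),
]

def weighted_share(rows, party):
    """Weighted headline share (%) for one party's current vote intention."""
    total = sum(w for _, _, w in rows)
    return 100 * sum(w for vote, _, w in rows if vote == party) / total

print(round(weighted_share(respondents, "Con"), 1))

# Quietly scaling up the weight attached to one recall group (here, 2010
# Conservative voters) moves the headline share without altering a single
# raw response.
nudged = [(v, r, w * 1.15 if r == "Con" else w) for v, r, w in respondents]
print(round(weighted_share(nudged, "Con"), 1))
```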

There are other possible, legitimate, explanations.

Will Jennings at the University of Southampton suggested to me that voters in different pollsters’ samples might become more similar towards the end of the campaign. I suppose it could be the case that, eg, one pollster might happen to have more Ukip-inclined voters on their panel – and that as the election nears, some of those people change their minds. I can just about see how that could affect all pollsters’ panels in ways that make them more similar (eg those with more Tories find that some switch to other parties), but it seems contrived and I don’t really see why this would happen.

A similar explanation (perhaps what Will's getting at) might be that lots of people just didn't know how they were going to vote until the last couple of days, so gave un-thought-out answers to pollsters until now. This could perhaps have caused the earlier variance, which declined once people made up their minds.

But I see two problems with this explanation. Firstly, although we're talking about people switching from one party to another, we should also see a sharp decline in undecideds from last week to this. While we do see one, I don't think it's big enough to account for such a change (eg Survation's undecideds were 12% on 1-2 May and 9.2% on 4-5 May).

And secondly, the explanation would produce large variation between polls regardless of who conducted them – but what we saw was mostly variation between pollsters and methods of polling (eg online polls giving a larger Ukip share and smaller Tory leads).
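On the first of those problems, a crude back-of-envelope gives a sense of scale: even in the extreme case where every resolving undecided broke for the same party, the most that party's share of the whole sample could move is the size of the drop itself (headline figures are usually quoted excluding don't-knows, so the real arithmetic is messier):

```python
# Back-of-envelope on the undecideds, using the Survation figures quoted above.
# Extreme-case assumption: every resolving "don't know" switches to one party,
# and shares are taken as a percentage of all respondents.
undecided_before = 12.0  # % of respondents, 1-2 May
undecided_after = 9.2    # % of respondents, 4-5 May

max_shift = undecided_before - undecided_after
print(f"Maximum shift in any one party's share: {max_shift:.1f} points")
```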

There was a debate about a similar trend in the US in 2008, which can be picked up here (h/t James Dobson). It offers lots of possible answers, some specific to that election, but none that satisfy me about what we’ve seen here. Some in that debate also suggest a deliberate ‘thumb on the scales’ from some pollsters.

In short, I don’t know how to explain the convergence of the polls. I don’t want to assume some pollsters are cheating (and am not sure how they’d do it anyway), and the ‘resolution of undecideds’ explanation makes intuitive sense – but I can’t quite make the data match the theory.

I’d welcome anyone else’s suggestions.


Methodological note: For ease and speed I haven’t separated online from phone polls, which would make for a more in-depth look at what’s happened. I also haven’t compared the variance within each agency’s polls with the variance between agencies, which would help to address the ‘undecideds’ question. To account for the fact that some agencies produce multiple polls in a week (especially YouGov), I averaged each company’s polls. This has a downside, since some companies use different methodologies for different polls – but I reckon it’s better this way than double-counting some agencies.
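In case it helps anyone reproduce it, the averaging-then-variance step looks roughly like this (made-up polls for one party in one week, not the actual dataset):

```python
from collections import defaultdict
from statistics import mean, pvariance

# Made-up polls for one party in one week: (pollster, share %).
# YouGov appears several times, as in the real data, so its polls are
# averaged first rather than counted repeatedly.
polls = [
    ("YouGov", 12), ("YouGov", 13), ("YouGov", 12),
    ("Survation", 16), ("ICM", 11), ("ComRes", 12), ("Populus", 13),
]

by_pollster = defaultdict(list)
for pollster, share in polls:
    by_pollster[pollster].append(share)

# One figure per company: the average of its polls in the period.
company_averages = [mean(shares) for shares in by_pollster.values()]

# Variance between companies, which is the number compared across weeks above.
print(round(pvariance(company_averages), 2))
```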
