Election opinion polling in the UK doesn’t have a great record, with famously drastic poll upsets in 1992 (the ‘shy Tories’ year, although perhaps not actually) and 2015 (Milifandom, and the overprediction of the youth vote). The Brexit referendum result polling also has a bad rep, which is a bit undeserved, although the dire and almost-completely-ignored polling fails in 1997 and 2010 probably karmically compensate for it.

Polling errors in the UK have tended to assume that the sort of things I like will happen, and then they haven’t so much. Often, this involves Labour beating the Tories at things, but it also involves Liberal Democrat vote shares (in 2010, the LDs were predicted to outperform Labour in some polls, and instead lost votes and seats compared to 2005. This was forgotten because they entered government), bad referendums (well, one bad referendum; the Scottish results are a whole different story), and UKIP vote shares (sadly, UKIP have had vote shares).

The 2017 election, then, was an interesting test for UK pollsters. And they reacted interestingly. The polls featured the largest divergence I can remember, and possibly the largest divergence in a developed country since polling became more of a science than an entertaining novelty, with a 16-point variance in expected leads – and an 11-point variance in expected leads among respected, regular pollsters (this table shows the final pre-election poll):

The divergence, unlike previous years, wasn’t primarily based on different methodologies for raw numbers. When you looked at the unadjusted numbers for pollsters such as ICM and Survation, they were almost identical. But – because of the 2015 failure – everyone was acutely aware that taking the raw voting intention numbers from Labour-supporting groups (mostly the young, but also the poor) was a risky endeavour.

In 2015, young people and poor people said they preferred Labour to the Tories and that they were probably going to vote, and then didn’t. Old people and rich people said they preferred the Tories to Labour and that they were probably going to vote, and then did. Since the UK has voluntary voting and doesn’t have US-style voter exclusion, “whether people who say they are going to vote then go on to bother” was the huge problem for pollsters.

ICM (as did Ipsos MORI – the poll in the above table is unusually narrow by fluke, and their name scans, hence the title) assumed the best model for 2017 would be 2015, because young people are feckless.

Survation (as did YouGov – the poll in the above table is unusually wide for dumb reasons that I’ll go into later) assumed the best model for 2017 would be a combination of 2015, the Brexit vote in which about 60-65% of young people turned out, and changes in how voters have tended to express enthusiasm for candidates since 2015.

Opinium and the others in the middle went for, well, being in the middle.
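For anyone wondering what “turnout modelling” means in practice, here’s a minimal sketch of the idea – all numbers invented for illustration, not from any actual poll – showing how the same raw voting-intention data produces different headline figures depending on how likely you assume each age group is to actually vote:

```python
# Illustrative sketch (made-up numbers): the same raw voting-intention
# data gives different headline results under different turnout models.

# Hypothetical raw intention by age group: (share of sample, Lab %, Con %)
groups = {
    "18-34": (0.30, 0.60, 0.25),
    "35-54": (0.35, 0.42, 0.40),
    "55+":   (0.35, 0.25, 0.60),
}

def headline(turnout):
    """Weight each group's stated intention by its assumed turnout probability."""
    lab = con = total = 0.0
    for group, (size, lab_pct, con_pct) in groups.items():
        weight = size * turnout[group]
        lab += weight * lab_pct
        con += weight * con_pct
        total += weight
    return round(100 * lab / total, 1), round(100 * con / total, 1)

# ICM-style assumption: 2015 turnout patterns (low youth turnout)
model_2015 = {"18-34": 0.45, "35-54": 0.65, "55+": 0.80}
# Survation/YouGov-style assumption: Brexit-referendum-level youth turnout
model_2017 = {"18-34": 0.62, "35-54": 0.68, "55+": 0.80}

print(headline(model_2015))  # → (38.4, 45.6): a comfortable Tory lead
print(headline(model_2017))  # → (40.0, 44.0): a much narrower race
```

With these toy figures, the only disagreement between the two “pollsters” is the youth turnout assumption – yet it moves the Tory lead from about seven points to about four, which is roughly the shape of the real ICM–Survation gap.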

Nate Silver, whose analysis of Trump’s potential was extremely good and who gets an unfair rap for it, summed up the pre-election position neatly (I also nicked his chart above, because hell, it’s not like I’m paid to write this), as about a 1/3 chance of each given how UK polls have gone in the past:

  • Narrow Conservative majority (Opinium)
  • Conservative landslide (ICM)
  • Hung parliament (Survation)

As ever, the very worst people alive were the pundits.

For Brexit and Trump, the pundits entirely ignored the fairly substantial polling possibility that something dreadful might happen, treating the within-margin-of-error possibility that it might not as a certainty.

This time, however, the vast majority of pundits (left and right) writing on the UK election seized entirely on the ICM-style polls as certainties, smugly writing off the Survation and YouGov ones as ridiculous and grudgingly conceding that the Opinium ones might just about be correct but probably not.

Corbyn’s rise was genuinely surprising, I should note here. I said here in February, completely incorrectly, that his unprecedentedly high negative personal ratings would be a huge problem in leading a mass political party (in my defence: I wasn’t expecting Theresa May to be mad enough to call a general election, and in the by-elections and local elections before the general campaign started, Labour’s performance was poor).

But by mid-May (the month May, fucking stupid words), it was clear that something good was going on – and Corbyn’s personal ratings had started to rise to match voting intent, whilst May’s plummeted. By the election they were more or less on a par, irrespective of polling house. And yet the pundits continued to assume that ICM were right and everyone else was silly.

YouGov, in a move they’ll always regret, threw in a couple of fishy adjustments (basically nailing down undecided voters to 2015 vote share) to their final pre-election poll, to take their prediction from a 3% Tory lead to a 7% one. Survation were the only hung-parliament polling house to honestly nail themselves to the mast and damn the consequences.
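To see how “nailing down undecided voters to 2015 vote share” can move a headline lead, here’s a toy example – all counts invented, not YouGov’s actual data or method – in which undecided respondents are simply assigned to the party they reported voting for in 2015:

```python
# Illustrative sketch (made-up counts): reallocating undecided respondents
# to their reported 2015 party can move the headline lead substantially.

sample = {
    "Con": 400, "Lab": 370, "Other": 130,
    # Undecideds, bucketed by reported 2015 vote (hypothetical):
    "undecided_2015_con": 70, "undecided_2015_lab": 30,
}

def lead(counts, reallocate):
    """Con lead over Lab in points; undecideds are excluded unless reallocated."""
    con, lab = counts["Con"], counts["Lab"]
    if reallocate:
        con += counts["undecided_2015_con"]
        lab += counts["undecided_2015_lab"]
    total = con + lab + counts["Other"]
    return round(100 * (con - lab) / total, 1)

print(lead(sample, reallocate=False))  # → 3.3
print(lead(sample, reallocate=True))   # → 7.0
```

In this invented sample the undecideds skew towards 2015 Tories, so pushing them back to their old party turns a roughly three-point lead into a seven-point one – the same order of shift as YouGov’s final-poll adjustment.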

And now we know what happened.

Modelling young voter turnout did work better than assuming they were lazy dicks. Labour achieved their best possible outcome consistent with any kind of vaguely sane forecasting model. Survation will dine out on this one forever, and whichever high-up at YouGov insisted on rigging their final poll to bring it back into line with the median will feel like an idiot forever (or, more disappointingly, he won’t).

I posted on Facebook a couple of days ago that this election will either make UK polling incredibly boring as everyone clusters around previous election results, or incredibly interesting as it becomes clear that the new-ish players with smart models kicked the arses of the conventional pollsters. Even beyond the result, I’m absolutely delighted to see that the latter is the case.

Oh, and another interesting thing about this election is the highly accurate Bayesian seat-by-seat model that YouGov built separately from their national vote model and kept semi-hidden while everyone laughed at it. But that’s another story…

Post title by Daniel Davies, based on a traditional folk song whose variation Survation might legitimately be singing right now. 

Image: Myanmar election ballot paper. By Phyo
