poll position

Should We Stop Paying Attention to Election Polls?

Are polls more trouble than they are worth? Photo-Illustration: Megan Paetzhold. Photos: Getty Images

As votes continue to trickle in, everyone other than Donald Trump and his sycophants has shifted to interpreting what happened rather than suggesting it’s a mystery or a crime. And already, a debate is developing over what is looking to be a sizable pattern of polling errors, and what they mean.

To be clear, the gap between polls and voting results may partially be explained by late trends that polls simply could not capture, not just methodological “error.” And it’s also important to remember all the votes aren’t in: for example, Joe Biden’s current national popular vote lead of 3.1 percent, a major source of the “scandal” of 2020 polling error, may swell to a less embarrassingly small level (the other day Nate Silver projected it will eventually climb to 4.7 percent).

Still, when you look at Election 2020 from top to bottom, there’s no question most polling-based prognostications were off more than a bit. FiveThirtyEight’s final polling averages in the presidential race gave Biden an 8.4 percent lead, and moreover, a lead in several states (notably Florida and North Carolina) Trump won, and a decent chance of winning several other states (e.g., Iowa, Ohio, and Texas) Trump won by large margins. Polling errors were even larger in Senate races, and the results in the House and in state legislative contests defied all expert expectations. For the most part, all these mistakes underestimated Republican performance, so understanding them isn’t just a matter of academic interest, but of institutional credibility at a time when conservatives already deeply mistrust mainstream media and the political content they produce or sponsor (including most polls).

At this fragile moment for the public opinion and media industries comes one of the most prominent media pollsters, the New York Times’s Nate Cohn, with a cri de coeur over apparent 2020 polling errors, via an interview by The New Yorker’s Isaac Chotiner. Here’s a sample:

[T]his was a much bigger polling miss, in important ways, than in 2016. It was a bigger polling miss in the national surveys. It was a bigger polling miss for the industry’s most prominent and pricey survey houses. The state polling error will be just as bad, even though … many state pollsters took steps to increase the number of white voters without a degree in their surveys. And state polls look a lot like they did in 2016.


But, if the state polls are just as bad as they were in 2016, despite steps that we know improved the President’s standing in the surveys, we can say with total confidence — and I know this was true in our data — that the underlying survey data has to be worse than it was in 2016. Or, if you prefer, if all the pollsters were using the 2016 methodology, the polls would have been far worse this year than they were in 2016. 

Cohn sorts through a variety of theories for why this is so: the “non-response bias” of Trump voters who don’t trust pollsters or their media sponsors; supercharged political engagement by liberals leading them to respond to polls disproportionately; mistakes in “likely voter” screens in a high-turnout election; and even the diverging reactions of Democrats and Republicans to COVID-19 risks in voting in person. In the end, though, he rather shockingly throws up his hands and suggests that polling might provide less information — or more misinformation — than it’s worth:

[T]elling the stories of these elections accurately has a huge effect on the course of politics in our country. And the polls in this election did not tell that story accurately. They said that Joe Biden was doing way better among white voters than Hillary Clinton. They were wrong about that. He did somewhat better, but the final polls from some of our most prominent survey houses had an all-but-tied race among white voters nationwide. It didn’t happen.


And if you can’t tell the story of an election at the end of it, then the democratic process has some serious problems. Because, in a democracy, politicians need to reflect the will of the electorate, and if you cannot do a good job of interpreting the will of the electorate at any given time, our politicians won’t either. And you end up in a position where the public may not be happy with what politicians try and do on their behalf. And so I think it’s a serious problem that the polls were wrong to the extent that they were this year.

In a new episode of the Times’s podcast The Daily, Cohn bluntly suggests that polling in both 2016 and 2020 was in significant ways “counter-productive,” and may be so to an extent that is not fixable.

Coming from someone so intimately connected with political polling as Nate Cohn, this sort of despairing pessimism about the whole enterprise is, of course, music to the ears of those who have always disparaged polling or the “horse-race media coverage” it enables. Unsurprisingly, the other Nate of data journalism, Mr. Silver, responded pretty quickly.

And Nate Cohn himself is quick to admit that alternatives to polling are (to put it mildly) unsatisfying:

I don’t think there is a good alternative to public polling. I don’t think we have other ways to measure the attitudes of a really diverse country. I think that without polling we would mainly consider the views of ourselves and our neighbors and our like-minded friends. And so we need tools to reach out to people who are very different from us in order to understand our country well. I don’t think that on-the-ground reporting cuts it. Face-to-face polling exists, and it doesn’t work all that much better anyway.

That’s a good and important point. There was tons of anecdotal, on-the-ground reporting this election year suggesting that former Trump voters were disgruntled with the president. Yet if anything he improved his performance among his white working-class rural-small-town-exurban “base.” And for that matter, the results defied history and even common sense as much as they diverged from horse-race polling. Presidents as tangibly and consistently unpopular as Trump don’t generally come as close to reelection as he did in 2020. Parties that lose a presidential election don’t generally do as well as Republicans did in congressional and state legislative races this year. Georgia has gone Democratic exactly once since 1980. A surge among Latino voters was not a predictable trend for the most nativist president since at least Calvin Coolidge. (Though it was in fact predicted to some extent by the much-derided polls.)

It’s also not at all clear that political polling affects elections themselves. Did all those Democratic primary voters this year really deem Joe Biden the “most electable” rival to Donald Trump because they pored over CNN or Monmouth or Times-Siena horse-race surveys? Or was that judgment attributable to long-standing stereotypes about the superior electability of centrists?

It’s an article of faith for me that the answer to faulty data is more data, not throwing it all out because it’s possible to exaggerate the conclusiveness or misinterpret the nuances of the information we have. But it’s possible that the sheer abundance of political polls has led to less reliable research, as Cohn suggests in the Chotiner interview:

I have long been of the view that media organizations should coöperate on polls rather than everyone having their own polls. We do it for the exit polls. And we don’t need to have six different national surveys a month coming out from each of the major news organizations saying more or less the same thing, with most of the deviation from the average being attributable to noise, not meaningful methodological choices. And I also feel like that would facilitate better analysis, and get larger subsamples, and so on.

This last point is worth pondering. Pollsters in large part “missed” the Latino vote story — or at least the magnitude of it — in part because of sub-samples too small to accurately measure Latino public opinion and research techniques too puny to reach a sufficient number of Latinos to begin with. Maybe we need better polling rather than more or less of it. Because without some set of objective data, however fallible, election coverage in the news media is going to become truly unreliable and partisan — a field for pure spin and for “shoe-leather reporters” who go out and find exactly what they are looking for and nothing else. And that would take today’s polarized understanding of public affairs and make of it a wall that nothing could penetrate other than brute force.
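The subsample problem is easy to quantify: a poll’s margin of error scales with the inverse square root of the number of respondents, so a subgroup making up a modest share of a survey carries a far wider error band than the topline number. Here is a minimal sketch of that arithmetic (the sample size and electorate share below are hypothetical round numbers for illustration, not figures from any actual survey):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

full_sample = 1000                # hypothetical national poll
latino_share = 0.13               # illustrative share of respondents
subsample = int(full_sample * latino_share)  # 130 respondents

print(f"Full sample (n={full_sample}): +/-{margin_of_error(full_sample):.1%}")
print(f"Subsample (n={subsample}): +/-{margin_of_error(subsample):.1%}")
```

With these assumed numbers, the topline carries roughly a three-point margin of error while the 130-person subsample carries closer to nine, which is why pooling surveys, as Cohn proposes, would materially improve subgroup estimates.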
