Less than a week from the deal going down, polls are showing an insanely close race between Donald Trump and Kamala Harris, both nationally and in the seven battleground states. Lately, it has been hard to find polls showing much of anything else. And that has led to suspicions, as often happens in the homestretch, that pollsters are “herding,” i.e., aligning their numbers as closely as possible with those of other pollsters, as Nate Silver has suggested.
Silver has been focused on herding for some time. When he was still running FiveThirtyEight, he offered an explanation for why some pollsters herd:
One further complication is “herding,” or the tendency for polls to produce very similar results to other polls, especially toward the end of a campaign. A methodologically inferior pollster may be posting superficially good results by manipulating its polls to match those of the stronger polling firms. If left to its own devices — without stronger polls to guide it — it might not do so well. When we looked at Senate polls from 2006 to 2013, we found that methodologically poor pollsters improve their accuracy by roughly 2 points when there are also strong polls in the field.
To put it another way, nobody wants the embarrassment of publishing a final preelection poll that turns out to be a complete outlier. Perhaps that’s why some of the worst outliers are actually produced by “stronger polling firms” that don’t need to worry about their reputation for accuracy, such as the New York Times–Siena outfit that Silver singles out as honest with its data. Times–Siena had Joe Biden up by nine points among likely voters in its final mid-October 2020 poll (Biden won the national popular vote by 4.5 points). Worse yet, Times–Siena had a late-October poll four years ago showing Biden leading by 11 points in Wisconsin, the tipping-point state he actually won by 0.7 points.
This very recent phenomenon creates something of a philosophical question: If polls turn out to be misleading, is the greater culprit the high-quality pollster who publishes an outlier survey or the lower-quality pollsters who “herd” in the same direction? It’s hard to say. An underlying question is how the “herding” happens, assuming pollsters don’t simply look around and force their numbers to match everyone else’s. Earlier this week, political scientist Josh Clinton explained how the decisions all pollsters have to make about their samples can vastly change their findings without any overt tampering with the bottom line:
After poll data are collected, pollsters must assess whether they need to adjust or “weight” the data to address the very real possibility that the people who took the poll differ from those who did not. This involves answering four questions:
1. Do respondents match the electorate demographically in terms of sex, age, education, race, etc.? (This was a problem in 2016.)
2. Do respondents match the electorate politically after the sample is adjusted by demographic factors? (This was the problem in 2020.)
3. Which respondents will vote?
4. Should the pollster trust the data?
Clinton goes on to demonstrate that pollsters’ answers to these questions can produce as much as an eight-point variation in the horse-race results. That 2024 general-election polls are not actually showing a huge amount of variation is probably the best evidence that pollsters are answering these questions as a “herd” even if they aren’t putting a thumb on the scales for Trump or Harris.
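To make Clinton’s point concrete, here is a toy sketch (in Python) of how the same raw interviews can produce noticeably different toplines depending on what the pollster weights them to. Every number in it is invented for illustration; the cells, targets, and margins are assumptions of mine, not Clinton’s data or any real poll.

```python
# Toy illustration of how weighting choices move a poll's topline.
# Every number here is invented; this is not data from any real poll
# or from Josh Clinton's analysis.

# Raw interviews grouped into cells by education and recalled 2020 vote.
# Each cell: its share of the unweighted sample, and the Harris-minus-Trump
# margin among respondents in that cell, in percentage points.
cells = {
    ("college",    "recalled_biden"): {"sample_share": 0.32, "margin": +80},
    ("college",    "recalled_trump"): {"sample_share": 0.16, "margin": -70},
    ("noncollege", "recalled_biden"): {"sample_share": 0.24, "margin": +70},
    ("noncollege", "recalled_trump"): {"sample_share": 0.28, "margin": -85},
}

def topline(weights):
    """Weighted Harris-minus-Trump margin, in points."""
    total = sum(weights.values())
    return sum(weights[c] * cells[c]["margin"] for c in cells) / total

# 1) No adjustment: take the sample as it came in.
unweighted = {c: cells[c]["sample_share"] for c in cells}

# 2) Weight to an assumed electorate that is 40% college / 60% non-college,
#    leaving the partisan mix within each education group untouched
#    (Clinton's first, demographic question).
educ_target = {"college": 0.40, "noncollege": 0.60}
by_education = {}
for (educ, recall), cell in cells.items():
    educ_in_sample = sum(v["sample_share"] for (e, _), v in cells.items() if e == educ)
    by_education[(educ, recall)] = cell["sample_share"] * educ_target[educ] / educ_in_sample

# 3) Weight instead to an assumed recalled-2020-vote split of 52% Biden /
#    48% Trump voters (Clinton's second, political question).
recall_target = {"recalled_biden": 0.52, "recalled_trump": 0.48}
by_recalled_vote = {}
for (educ, recall), cell in cells.items():
    recall_in_sample = sum(v["sample_share"] for (_, r), v in cells.items() if r == recall)
    by_recalled_vote[(educ, recall)] = cell["sample_share"] * recall_target[recall] / recall_in_sample

for label, w in [("unweighted", unweighted),
                 ("weighted to education", by_education),
                 ("weighted to recalled 2020 vote", by_recalled_vote)]:
    print(f"{label:32s} Harris {topline(w):+5.1f}")
```

In this contrived example, the identical interviews come out at roughly Harris +7 unweighted, Harris +4 when weighted to an assumed education mix, and Harris +1 when weighted to an assumed recalled-2020-vote mix, a swing of about six points from the same data. That is the kind of spread Clinton is describing.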
We can’t know if the herd was right or wrong until after the election, but in the pollsters’ defense, they have mostly been working hard to address the issues that produced big errors in state polls in 2016 and in national and state polls in 2020. (They were, after all, very accurate in 2022.) Still, as Clinton argues in a more recent piece, the error could recur this year, and this time it could be shared across pollsters more systematically:
The fact that so many swing state polls are reporting similar close margins is a problem because it raises questions as to whether the polls are tied in these races because of voters or pollsters. Is 2024 going to be as close as 2020 because our politics are stable, or do the polls in 2024 only look like the results of 2020 because of the decisions that state pollsters are making? The fact that the polls seem more tightly bunched than what we would expect in a perfect polling world raises serious questions about the second scenario.
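Silver’s herding diagnosis and Clinton’s “more tightly bunched than we would expect” worry rest on the same piece of arithmetic: even if a race is genuinely tied, independent polls of it should scatter from one another simply because of sampling error. Here is a rough sketch of that check; the poll margins in it are made up for illustration and are not real surveys of any state.

```python
# A back-of-the-envelope version of the "too tightly bunched" check.
# The poll margins below are invented for illustration; they are not
# real surveys of any state.

from math import sqrt
from statistics import pstdev

# Hypothetical published margins (Harris minus Trump, in points) from
# eight polls of the same battleground state, each with ~800 likely voters.
published_margins = [0.0, 1.0, -1.0, 0.0, 1.0, 0.0, -1.0, 1.0]
n_respondents = 800

# If the race is truly 50-50 and each poll were an independent random
# sample, the reported margin would vary from poll to poll with a
# standard deviation of roughly 2 * sqrt(p * (1 - p) / n).
p = 0.5
expected_sd = 2 * sqrt(p * (1 - p) / n_respondents) * 100  # in points

observed_sd = pstdev(published_margins)

print(f"expected spread from sampling error alone: ~{expected_sd:.1f} points")
print(f"observed spread of the published margins:   {observed_sd:.1f} points")
# Sampling error alone would produce more scatter than these invented
# polls show (~3.5 vs. ~0.8 points), which is the pattern that makes
# herding a plausible suspicion.
```

Because real polls also differ in mode, field dates, and weighting choices, their margins should if anything scatter more than sampling error alone implies; finding them bunched well inside that band is what turns a close race into a herding question rather than simple evidence of stability.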
Putting aside the polls for a moment, anxious pundits and supporters of both presidential candidates are understandably looking for signs that could indicate a close election breaking one way or the other at the last minute. Some in both camps are obsessed with the fool’s gold of early-voting data; given the massive imponderables in determining who these early voters are compared with past cycles and whether their “banked” votes would simply have been cast later anyway, you can use early voting to “prove” whatever you want. Others are obsessed with subjective indicia of “enthusiasm,” which matters only to the extent that it reaches beyond the certain-to-vote and proves infectious (an “unenthusiastic” vote counts exactly the same as an “enthusiastic” one). A more relevant factor is the scope and effectiveness of last-minute ads and voter-mobilization efforts, but the former tend to cancel each other out, and the latter are generally too submerged to weigh with any degree of certainty.
Finally, some observers put stock in late trends involving the objective condition of the country, particularly improvements in macroeconomic data. There are two problems with that approach: First, perceptions of the economy tend to be baked in well before Election Day, and second, current voter perceptions of all sorts of phenomena have relatively little to do with objective evidence. Big chunks of the electorate believe against the evidence, for example, that the economy is horrible and getting worse, that we are in the middle of a national crime wave, and that millions of undocumented immigrants are pouring into heartland communities to do crimes and vote illegally. This isn’t an environment in which many voters are anxiously examining statistics to see how America is doing.
If the polls do turn out to be off significantly, we will almost certainly see a wave of postelection know-nothingism in which angry or frustrated people argue that we should throw away all objective indicators of how an election is unfolding and instead rely on vibes, “gut impulses,” and our own prejudices. I hope that doesn’t happen. Imperfect as they are, polls (and, for that matter, economic indicators and crime or immigration statistics) are a lot better than relying on cynical partisan hype, spin, and disinformation, all of which tend to be self-perpetuating as they are given credence. And as we already know — and may be reminded on November 5 — it’s a short jump from rejecting polls to rejecting actual election results. And that way lies another January 6, or something worse.