As regular consumers of both mainstream and social media know, there is a vocal group of politically minded people who absolutely hate horse-race polling (i.e., polling about who is leading in election contests). They have varying reasons. Some think polls systematically understate the viability of their favorite party or politicians. Others just dislike the hype surrounding poll findings and the phony conflicts over various numbers. And particularly among progressives, there are some who object to such polling because they feel the coverage it generates crowds out the policy discussions that ought to be the focus of political media.
To all these poll-o-phobes, the recent emergence of self-doubt in the public-opinion industry, prompted by polling errors in certain elections, is a tiding of great comfort and joy. A particularly big moment came on November 4, after the gubernatorial elections in New Jersey and Virginia, when Monmouth University Polling Institute director Patrick Murray, deploring his own big “miss” in the New Jersey race, made this statement in an op-ed:
Public trust in political institutions and our fundamental democratic processes is abysmal. Honest missteps get conflated with “fake news” — a charge that has hit election polls in recent years …
Most public pollsters are committed to making sure our profession counters rather than deepens the pervasive cynicism in our society. We try to hold up a mirror that accurately shows us who we are. If election polling only serves to feed that cynicism, then it may be time to rethink the value of issuing horse race poll numbers as the electorate prepares to vote.
As Murray pointed out, two of the big guns in public opinion, Gallup and Pew Research Center, have already stopped polling candidate preferences, though they still poll on issues, presidential job approval, ideological views, partisan affiliation, and other horse-race-adjacent matters. And Murray’s freak-out over polling error in New Jersey reflected broader anxieties expressed within and beyond the polling industry over high-profile “misses” in the 2016 and 2020 presidential elections.
Now it’s important to note that polls were quite accurate in the 2018 midterms, and were also spot-on in the Virginia gubernatorial race that occurred the same day as New Jersey’s (in the final RealClearPolitics polling averages for Virginia, Glenn Youngkin led Terry McAuliffe by 1.7 percent; he won by 1.9 percent). And it’s easy to exaggerate the 2016 and 2020 errors. In the former election, the final RCP average projected a 3.3 percent Clinton lead over Trump; her actual popular-vote plurality was 2.1 percent, an error of just 1.2 points. The error was larger in 2020, but was a less-than-astronomical 2.7 points (RCP averages showed Biden up 7.2 percent, and he won the popular vote by 4.5 percent).
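To put those national misses in perspective, here is a minimal arithmetic sketch, using only the RCP figures cited above (the variable names are mine), that computes each error as the final polling-average lead minus the actual popular-vote margin:

```python
# Polling "error" here means the final polling-average lead minus the actual
# popular-vote margin, using only the national RCP figures cited above.
# All figures are in percentage points.
races = {
    "2016 (Clinton vs. Trump)": {"poll_avg_lead": 3.3, "actual_margin": 2.1},
    "2020 (Biden vs. Trump)": {"poll_avg_lead": 7.2, "actual_margin": 4.5},
}

for race, nums in races.items():
    error = nums["poll_avg_lead"] - nums["actual_margin"]
    print(f"{race}: polls overstated the Democratic lead by {error:.1f} points")
```

Both errors land in the low single digits, which is the point: embarrassing, perhaps, but hardly evidence that national polling has collapsed.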
The more crucial errors in both cases were in state polling, which (a) is generally less accurate than national polling, and (b) is conducted less frequently. Yes, the chatter about Clinton’s and Biden’s big national leads, based on national polling, may have misled people who forgot there was this thing called the Electoral College that actually determines the presidency. But this goes to my fundamental problem with horse-race-polling abolitionism: Bad media coverage of political races won’t necessarily go away, or even improve, if you get rid of candidate-preference polls. Indeed, getting rid of the polls will likely create a vacuum that will be filled with partisan spin, leaked campaign poll results (believe me, the candidates aren’t going to deny themselves polling data), and “reporting” that harvests predictable, self-confirming “data” from tiny samples, conspiracy theories, and other misinformation.
FiveThirtyEight’s Galen Druke raised many of these and other concerns with Murray in a podcast interview this week. The more you listen to the back-and-forth, the clearer it becomes that Murray’s big fear is that the perception of pollster bias, fed by polling errors, is contributing to the loss of “public trust in political institutions and our fundamental democratic processes,” which he cited in his op-ed. This is a pretty clear allusion to the anti-democratic (and anti-Democratic) fallout reflected in heavy Republican subscription to the Big Lie about the 2020 elections. And it helps explain why Murray is upset about New Jersey but not Virginia, and about 2020 polling but not 2018 polling. The crisis, it seems, is that misleading (or, more accurately, misinterpreted) polls are among the factors turning Republicans into authoritarians who won’t believe anyone other than Donald Trump.
It’s an understandable fear, and one that may particularly grip pollsters, who suspect that a disproportionate refusal by Republicans to participate in polls is at the root of the 2020 polling “miss,” and perhaps of others. Maybe not doing horse-race polls at all will keep the problem from getting worse.
There are, fortunately, remedies short of abolitionism that could help address the legitimate issues Murray and others have raised, without unnecessarily obscuring elections for political office in a data-free fog. Pollsters can more carefully establish and publicize margins of error and what they mean. They can also simply forgo likely-voter modeling (which Murray rightly suggests is the source of much, and maybe most, polling error), relying instead on predefined samples like registered voters, or even the “all adults” samples typical of the job-approval and issues polls no one seems to find objectionable. Pollsters could then make it clear that they are not estimating turnout patterns, which might significantly reduce perceptions of bias.
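To illustrate what publicizing “margins of error and what they mean” could look like in practice, here is a minimal sketch of the standard textbook calculation (the 800-respondent poll and the function are hypothetical examples of mine, not Monmouth’s or any other pollster’s actual methodology). One point worth conveying to readers: the uncertainty on the gap between two candidates is roughly double the headline margin of error on a single candidate’s share.

```python
import math

def margin_of_error(share: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error, in points, on a single candidate's share
    from a simple random sample of size n. A generic approximation, not any
    particular pollster's weighting or turnout model."""
    return 100 * z * math.sqrt(share * (1 - share) / n)

# A hypothetical poll of 800 respondents showing a 48-45 race:
moe_share = margin_of_error(0.48, 800)  # about +/-3.5 points on one candidate's share
moe_lead = 2 * moe_share                # roughly double that on the lead itself
print(f"MOE on a single candidate's share: +/-{moe_share:.1f} points")
print(f"Approximate MOE on the lead:       +/-{moe_lead:.1f} points")
```

By that arithmetic, a 3-point lead in a poll of 800 is well within the noise, which is exactly the kind of caveat that tends to get lost in horse-race coverage.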
Because misuse of polling data is probably the biggest problem of all, media outlets should be strongly encouraged to balance polling data with other kinds of political coverage, whether it’s on-the-ground campaign reporting, issues polling, or simply a focus on events remote from the campaign trail (e.g., actual governing activity in the three branches of government, and at the federal, state, and local levels). And even when polls are reported, consumers of the data (including the media) should look at averages rather than individual surveys, and treat sparse polling of particular contests (which, ironically, voluntary decisions by individual pollsters to stop horse-race polling will only exacerbate) as a danger sign for anyone making predictions. It’s no accident that the New Jersey governor’s race featured less public polling than its counterpart in Virginia; similarly, the state polls that were off in 2016 and 2020 were, in most cases, conducted less frequently than national polls. Should it be any surprise that more polling means greater overall accuracy?
But make no mistake, there’s no silver bullet. As the deep skepticism over exit polls (sort of a combination of candidate-preference and issues polling) shows, non-horse-race polling has its own problems. There is a lot of “pure” issues polling out there that’s unreliable and downright biased, thanks to tricks of wording and question order (and a lot of it is commissioned by special interests promoting a particular point of view).
Some high-minded folks might ask a more fundamental question: What would we lose if we got rid of horse-race polling and instead did a lot more issues polling? My answer may infuriate such people, but it’s the truth: In our system, and especially with today’s extreme partisan polarization, who wins elections has much greater influence on policy outcomes than all the policy “debates” and public-opinion surveys you can devise. Politicians in both parties (and particularly Republicans, I would argue) routinely ignore issues polling in deciding what to do; ideology and pressure from donors and activists typically matter more, which is why Republicans won’t support even the most modest gun-safety measures, and Democrats won’t give the government the prescription-drug-price negotiation powers the public has demanded for years. To put it another way, you can’t take the politics out of politics.
Polling of all sorts can and should be improved, and without question, we must do a better job of reporting and interpreting survey findings. But it’s folly to think that reducing or abolishing one type of polling is going to keep Republicans from believing Big Lies, or give politicians in both parties overpowering incentives to focus on policies rather than politics. In the end, the answer to flawed data is more data, not less, with the kind of transparency and accountability we can’t get from private polls done for private purposes and then leaked and spun selectively. There is absolutely too much ignorance and lying and sheer darkness surrounding politics and government in this country, but you cannot stop it by turning out the lights.