
Polls Don’t Measure What You Think They Measure

Gimmicky polls tell us only one thing — and it’s one thing that we know already: America is bitterly divided along political lines.

Another stupid poll has been making the rounds. Two professors, Ariel Malka and Yphtach Lelkes, claim to have found that 52 percent of Republicans would support postponing the 2020 election “until the country can make sure only eligible American citizens can vote.” Within a few hours, the result was everywhere, retweeted by everyone from Dan Drezner to Keith Olbermann to Evan McMullin, and featured in The Hill, New York magazine, and The Week, among many other outlets. This is unfortunate.

For starters, the poll just wasn’t very good. It began by asking questions about illegal immigration and voter fraud, thus priming conservative respondents to be sensitive to those issues. And, as Dan McLaughlin points out, we have little context with which to interpret the results, since the pollsters didn’t ask Democrats or independents similar questions. Imagine, for instance, a similar poll in December 2016 asking whether respondents would support delaying Trump’s inauguration until “the country can make sure there was no Russian hacking of votes in the election.” It isn’t hard to imagine that such a question would have produced results among Democratic respondents similar to those this poll produced among Republican respondents.

But there is a broader problem, one that has to do with how we interpret polls more generally and that should be on our minds whenever we’re confronted with a finding like this: Polls don’t track people’s actual beliefs. They track people’s stated beliefs.

When it comes to elections, this generally doesn’t matter, because there usually isn’t much daylight between how people anonymously tell a pollster they intend to vote and how they actually vote. The evidence suggests, for instance, that people were largely truthful with pollsters about their voting preferences in the 2016 election. And despite widespread concern about a “shy Tory” effect in the United Kingdom, whereby conservative voters would misrepresent their intentions because of anti-Tory stigma, it seems likelier that British pollsters are simply quite inaccurate. Likewise, Marine Le Pen did worse in the recent French election than the polls indicated she would, the opposite of what you would expect if some voters lied about their intentions to conform to the socially acceptable stance. Polls are trustworthy beyond elections, too: There’s no reason to believe that respondents would lie to pollsters about whether they approve of certain politicians or political parties or, most of the time, about what their values are.

These questions work because they’re well defined and binary, and because respondents are generally comfortable with the answers. There isn’t much ambiguity in a choice between Clinton and Trump, and most respondents will know where they stand, even if where they stand is “undecided.” The same principles generally hold if you ask people whether they consider themselves pro-choice or pro-life, liberal or conservative, whether they prefer Democrats or Republicans, or whether they’re satisfied with their jobs. Questions of this sort will generally produce reliable answers, subject to the universal constraints of polling.

But problems start to multiply when you get far beyond these sorts of queries. Best known is the phenomenon of social-desirability bias, in which survey respondents tend to answer questions in ways that make them look better. One rather amusing example: Men between the ages of 25 and 44 report having had an average of 6.1 opposite-sex sexual partners; women in the same age group report an average of 4.2 partners. Supposing that there are roughly equal numbers of sexually active men and women, this strongly suggests that someone is lying. Another example: The South is generally considered to be the American region with the greatest obesity problem, a reputation based on self-reported height and weight data. But a University of Alabama study used “direct measures” — weighing people — and found that the upper Midwest actually had higher obesity rates. The South may simply have less of a stigma against obesity, encouraging relatively honest self-reporting.
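To make the arithmetic explicit, here is a back-of-the-envelope identity, with $N_m$ and $N_w$ standing for the numbers of sexually active men and women and $\bar{p}_m$ and $\bar{p}_w$ for their average reported partner counts (the notation is mine, not the survey’s). Every opposite-sex partnership contributes exactly one partner to each group’s total, so both sides count the same set of partnerships:

\[
N_m \,\bar{p}_m \;=\; N_w \,\bar{p}_w
\quad\Longrightarrow\quad
\frac{\bar{p}_m}{\bar{p}_w} \;=\; \frac{N_w}{N_m}.
\]

If $N_m \approx N_w$, the ratio of the two averages should be close to 1, but the reported ratio is $6.1/4.2 \approx 1.45$, which the identity cannot accommodate unless at least one group is misreporting.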

But there’s more going on than just social-desirability bias. Consider church attendance. Most polls suggest that about 40 percent of the American population attends church, synagogue, or mosque once a week or more. That figure has been roughly stable since at least the late 1930s, reaching a high of 49 percent in the mid-1950s and a low of 36 percent today (although it dipped to 37 percent in 1940 and 38 percent in the mid-1990s). Yet a number of denominations have been losing members over the last 50 years, and a 1993 study found that actual attendance rates were about half the self-reported numbers for both Protestants and Catholics. The study attributed the gap to social-desirability bias, but that explanation doesn’t really hold up: Views on church attendance were far more conservative in the 1940s and 1950s than in the 1990s, yet more people, it seems, were misrepresenting their church habits in the 1990s, which runs exactly counter to the expected trend.

A more plausible way of interpreting the data is in the context of polls suggesting that religious belief has remained relatively consistent over time. For instance, as many Americans believe in heaven now as did in 1968, and more believe in hell, while the proportion of Americans who say religion is very important to them has remained fairly stable over the last 25 years. These figures seem relatively trustworthy (although it is worth noting that most surveys suggest the number of atheists is rising, and that they probably still underestimate that number). So perhaps what’s going on is that respondents interpret the question about regular church attendance as a question about religious belief — that is, they answer “yes” if they feel they’re the type of person who goes to church once a week. This isn’t so different from social-desirability bias, since it involves people answering questions so as to align with the position they find more desirable. But instead of answering poll questions to signal conformity with broader social values, people answer them to signal their ideological alignments.


As you might imagine, this effect is magnified considerably in politics, where tribal associations are stronger than in almost any other societal domain. One simple example: Just as it is difficult to get a man to understand something when his job depends on his not understanding it, it seems quite difficult to get a man to believe something if it is inconsistent with his politics. Democrats believe that inflation rose under Reagan (it fell); Republicans believe that the deficit rose under Clinton (it also fell) — and so on. For a long time, that was considered the whole story: Democrats and Republicans were walling themselves off in separate realities by cultivating mistruths that they found personally satisfying. Many people still believe this today.

But there’s a catch that most people don’t know about. If you pay respondents for answering questions correctly, the partisan gap shrinks substantially. The problem, then, isn’t that partisans are walling themselves off from reality. They’re aware of reality but choose to neglect it in favor of partisan signaling unless there are actual stakes. This helps explain a few other mysteries: It was never quite credible, for instance, that 13 percent of the American electorate believed Obama was the antichrist, or that a plurality of Republicans believed he was born outside the United States. The much more likely hypothesis is that a certain number of Americans disliked him enough to agree with whatever ludicrous statement made him look particularly bad. Another example: Public Policy Polling notoriously found that 30 percent of the Republican base supported bombing Agrabah, the fictional homeland of Aladdin (57 percent were not sure, which is probably the best possible answer you can give about our policy toward nonexistent polities). Meanwhile, 36 percent of the Democratic base opposed bombing Agrabah, and 45 percent were unsure. Many observers noted that these results suggested a disturbing but not entirely surprising ignorance on the part of the American electorate. But the reflexive partisanship is no less remarkable. When given a lunatic question about which it is possible to have no intelligent viewpoint, substantial minorities of Republicans and Democrats alike churned it through their ideological filters and came up with the response that best corresponded to their partisan allegiances.

We should stop taking these sorts of polls seriously. They teach us only one thing — and it’s one thing we know already: America is bitterly divided along political lines. Ironically, it is exactly this sort of national division that gimmicky polls exploit to convince partisans that the opposite tribe is as stupid and as evil as they’ve always suspected. And so things get worse, and the cycle continues.


Max Bloom is an editorial intern at National Review and a student of mathematics and English literature at the University of Chicago.