A few days before Eric Cantor lost his Virginia congressional district’s GOP nomination last year, his campaign touted the finding of an internal poll. It showed Cantor, the majority leader of the House, beating his challenger, Dave Brat, 62 percent to 28 percent. As good as this 34-point lead looked, its release signaled that Cantor sensed trouble. He was right to fret: On June 10, voters favored Brat by 11 points.
Cantor’s defeat marked one of the most startling upsets in recent political history, and official Washington, worried whenever an incumbent falls, wondered what went wrong. Had Cantor neglected his constituents? Had he irritated conservatives over immigration? Had late-deciding voters broken for Brat in the final hours? Whatever the factors, just about everybody agreed on one thing: The pundits didn’t see it coming.
“It was an aberration,” says John McLaughlin, the pollster who worked for Cantor. He blames the surprise on sabotage — i.e., Democrats who took advantage of Virginia’s open-primary laws to cast protest votes against a GOP heavyweight.
Whatever the causes of the blindsiding, the plain fact is that polling is getting harder, especially at the local level, and the supposed aberrations could become routine. Pollsters are scrambling to keep up with changes in technology and behavior that have rendered traditional survey methods obsolete. “We’re facing serious challenges,” says Scott Keeter of the Pew Research Center. Steve Mitchell, a longtime pollster in Michigan, is more dramatic: “I’m not sure I’ll be able to do this for more than two or three more election cycles,” he says. “We could be watching the death of polling.”
The birth of polling came in the 19th century, as newspapers tried to gauge popular sentiment by sending reporters into the streets, armed with questionnaires and tasked with collecting opinions. By the 1930s, George Gallup, Elmo Roper, and others had improved upon these primitive practices. They pioneered new approaches that relied on sampling and probability. Even so, they suffered a few remarkable failures — none more notorious than the one represented in the iconic photo of a grinning President Truman in 1948, when he held up a copy of the Chicago Tribune with its erroneous banner headline, “Dewey Defeats Truman.” As the historian Philip White points out in Whistle Stop, his recent book on the election, Gallup and Roper separately made a series of methodological mistakes. They also had become so convinced of Thomas E. Dewey’s inevitable victory that they stopped polling before the race was really over.
Yet they were smart enough to learn from their failures, and also to take advantage of the rise of the telephone, which made it possible for researchers to call just about anybody and produce almost completely random samples of the population. Their work began to look scientific, and polling entered a kind of golden era in the 1970s that mixed relatively low costs with reliable conclusions. Ever since, most political polls have gotten the right result not just most of the time, but the vast majority of the time.
Recent elections, however, have presented new challenges. In 2012, GOP presidential nominee Mitt Romney was certain that he would win. Anybody who doubts the sincerity of his conviction need only watch the opening scene of Mitt, the Netflix documentary on his campaign. “Does someone have a number for the president?” asked Romney on Election Night, recognizing for the first time that he would have to concede. “Hadn’t thought about that.” The reason he hadn’t thought about it was that his pollsters had fed him lousy data — information that Romney and many of his supporters chose to believe, even as it contradicted other surveys that showed the race tipping to President Obama.
The polls of 2014 saw nothing quite so spectacular, and they did a fairly good job of picking winners and losers. Yet they commonly underestimated the strength of GOP candidates — a flaw on display perhaps most prominently in Virginia’s Senate race, in which Republican Ed Gillespie nearly ousted Mark Warner, the Democratic incumbent. Most polls had predicted a double-digit win for Warner, who in the end prevailed by less than a percentage point. Many Republicans wondered whether more-accurate polling would have boosted GOP turnout and generated a different outcome.
So what’s going on? Several factors have conspired in recent years to disrupt longstanding practices in public-opinion research, making it more difficult and expensive to arrive at reliable results. They can be summed up in two words: contact and cooperation.
The rise of cell phones has made it easier than ever to reach people, but it also has had the paradoxical effect of complicating the work of pollsters. That’s because close to half of all households have dropped their landlines, driving a wedge between phone numbers and places of residence. Pollsters no longer can look at area codes and make safe assumptions about where people live. The difficulty is greater among particular demographic groups, such as the young, the poor, and minorities, whose reliance on cell phones runs higher than average. Call-screening technologies and voicemail compound the problem by encouraging people to turn down callers they don’t know. Finally, a federal law prevents pollsters from auto-dialing cell phones — a consumer-protection measure that dates from when airtime was calculated in dollars per minute but is still on the books today. So reaching people on cell phones takes more effort, raising the costs of polling.
Even when pollsters connect with potential survey respondents, they face a new dilemma: a growing reluctance to cooperate. In 1997, according to the Pew Research Center, pollsters could count on initial conversations to turn into successful interviews more than one-third of the time. By 2012, this rate had dropped to less than 10 percent. So it takes many more calls — and deeper pockets — to yield the same results. “I used to do 300 interviews in races for the state legislature,” says Bill McInturff of Public Opinion Strategies. “Last year I was down to 250.”
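The arithmetic behind those numbers is simple but brutal. A rough sketch, using the cooperation rates reported above (the figures are illustrative, and the margin-of-error formula is the standard worst-case calculation for a simple random sample, not any particular pollster’s method):

```python
import math

def contacts_needed(interviews, cooperation_rate):
    """Initial conversations required to complete a given number of interviews."""
    return math.ceil(interviews / cooperation_rate)

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Pew's figures: roughly one conversation in three became an interview
# in 1997, fewer than one in ten by 2012.
calls_1997 = contacts_needed(300, 1 / 3)   # 900 conversations
calls_2012 = contacts_needed(300, 0.09)    # 3,334 conversations

# Trimming a legislative-district poll from 300 to 250 interviews widens
# the margin of error only modestly -- the real squeeze is the call volume.
moe_300 = margin_of_error(300)  # about +/- 5.7 points
moe_250 = margin_of_error(250)  # about +/- 6.2 points
```

In other words, the same 300-interview district poll that once took about 900 conversations now takes well over 3,000, while cutting the sample to 250 buys back little of that cost and loses only about half a point of precision.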
A final complication for political polls involves timeliness. When pollsters conduct market research on brands and products, they can go into the field for weeks at a time, ask people what they think about Chevy and Ford or Coke and Pepsi, and emerge with trustworthy results. In politics, however, pollsters are often chasing the news and trying to spot daily trends. Clients want numbers overnight, whether they’re candidates trying to craft messages or media companies seeking a snapshot of a race following a major event. It’s even tougher when they have to burrow down to the level of legislative districts. “Pollsters are still pretty good at big public questions, such as the presidential approval rating,” says Karlyn Bowman of the American Enterprise Institute. “Local elections are getting a lot harder.”
Many pollsters are turning to the Web. The ordinary online poll, of course, is grossly unrepresentative, allowing anybody who stumbles on it to join in. Research firms, however, are striving to form large panels of survey participants whose information, when crunched the right way, provides true reflections of public opinion. “There’s a lot of trepidation because nobody knows the rules,” says Michael Link of the Nielsen Company. “The old style of polling developed over decades. Right now, we’re in the middle of a paradigm shift that’s only a few years old. We’re facing a steep learning curve.”
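Crunching a panel “the right way” usually means some form of post-stratification: weighting each respondent so that the panel’s demographic mix matches known population totals, which corrects for the fact that opt-in panels skew toward certain groups. A minimal sketch of the idea, with invented categories and numbers that stand in for census benchmarks and panel data (real firms weight on many variables at once):

```python
# Post-stratification: weight panel respondents so their demographic mix
# matches known population shares. All figures here are invented.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# An opt-in panel that skews older than the population.
panel = [
    {"age": "18-34", "answer": "approve"},
    {"age": "35-54", "answer": "approve"},
    {"age": "35-54", "answer": "disapprove"},
    {"age": "55+",   "answer": "disapprove"},
    {"age": "55+",   "answer": "disapprove"},
    {"age": "55+",   "answer": "approve"},
]

# Panel share of each group, then weight = population share / panel share.
counts = {}
for r in panel:
    counts[r["age"]] = counts.get(r["age"], 0) + 1
weights = {g: population_share[g] / (counts[g] / len(panel))
           for g in population_share}

# Weighted estimate of approval: 59.2%, versus 50% unweighted,
# because the underrepresented young respondents count for more.
total = sum(weights[r["age"]] for r in panel)
approve = sum(weights[r["age"]] for r in panel if r["answer"] == "approve")
print(f"weighted approval: {approve / total:.1%}")
```

The hard part, and the source of the trepidation Link describes, is that an opt-in panel can be weighted to look like the population demographically and still differ from it in ways the weights never touch.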
One experiment has shown promising results. SurveyMonkey, which helps companies, schools, and communities conduct polls of targeted audiences, regularly invites people who have finished one of its polls to take another. Using this piggybacking technique last fall, SurveyMonkey completed 135,000 interviews in the 45 states with contested races for the governorship or the U.S. Senate, keeping the results secret until after the election. Jon Cohen, a former pollster for the Washington Post, managed the project. As the results poured in, he saw something different from what the public polls were showing. “I spent October pulling out my hair, wondering why we had a Republican bias,” he says, referring to the fact that GOP candidates tended to do better in SurveyMonkey’s polls than in the polls reported by the press. Following the elections, SurveyMonkey released its results: Its polls had picked the winner in 69 of 72 races, missing only the contests for governor in Connecticut, Florida, and Maryland. Even more impressive, its predicted margins of victory were closer to the mark than those of just about everyone else, even though nobody had used a telephone.
Online polls enjoy other advantages. Over a landline, pollsters can ask people whether they think the United States is on the “right track” or the “wrong track.” On smartphones and computers, however, they can also test television advertisements, bumper stickers, campaign logos, and anything else that involves an image. John McLaughlin may have gotten burned in Eric Cantor’s primary last year, but he also helped Nathan Deal, the Republican governor of Georgia, win reelection. “We polled on the Web, so we were able to test nine ads with a panel of voters, who ranked them in order of preference,” says McLaughlin. “This helped us see what worked and what didn’t work, and to tailor messages to specific groups.” His method combined the breadth of a poll with the detail of a focus group, all in the service of winning votes.
Candidates who appear to trail their opponents have a favorite cliché: The only poll that matters is the one on Election Day. Good candidates always have known that this is at best a half-truth — and perhaps with the rise of new forms of polling, the rest will come to appreciate it as well.