Talking Points Memo had a piece last night, noting that NBC (the mainstream media!) has included an “unskewed” analysis of their Ohio poll:
NBC News, anticipating backlash over polls released Saturday that showed President Obama ahead in two crucial swing states, attempted to head off conservative criticism that the surveys sampled too many Democrats by preemptively “unskewing” its own numbers.
The polls, released jointly by NBC News, the Wall Street Journal, and Marist College, showed Obama leading Republican challenger Mitt Romney by six points in Ohio and two points in Florida. But in the Ohio poll, Democrats held a nine-point party identification advantage — a figure sure to draw conservative ire. (In Florida, Democrats held a two-point advantage.)
So as part of a story online about the polls on Saturday, NBC News senior political editor Mark Murray included a section essentially imagining if the findings of his organization’s own poll were altered in a way that showed the Democratic sample in Ohio cut in half. In that case, Murray wrote, Obama’s lead would shrink to three points.
Criticizing pollsters for allegedly oversampling Democrats has become a cottage industry on the right. Over at National Review Online, Josh Jordan on Saturday referred to Marist’s polling as an “in-kind contribution to Obama.”
Now Murray told TPM that NBC remains convinced that the Marist poll (with its Democrat +9 advantage) is right: “The numbers are the numbers, and we stand by them.”
But TPM’s piece is just the latest example of those in the center and on the left laughing at those on the right for “unskewing” the polls or questioning the partisan breakdown in the polling sample.
So here’s my take: I don’t know what the partisan breakdown is going to be election night, including in Ohio. (If it is plus 9 Democrat, kudos to Marist for excellent polling.) But it’s unfair to conservatives to simply mock them for questioning the partisan breakdown of polls. There’s nothing dumb, unscientific, or risible about questioning polls’ partisan samples.
Here’s why: pollsters regularly make sure they have set percentages of certain demographics. A pollster will assume, for instance, that the electorate will be between X and Y percent Asian, between X and Y percent from this region of the country, between X and Y percent at this income level, etc. But these are all guesses. The electorate regularly changes – something the Obama campaign, which touts how many women, Latinos, and young adults it has registered to vote, is very much aware of. Depending on who registers and who doesn’t, and who bothers to vote and who doesn’t, the final electorate could change significantly.
Now what pollsters don’t do for the most part is make sure they have set percentages of Republicans, Democrats, and independents. Instead, most pollsters view the partisan breakdown like they view the presidential candidate support numbers: something the poll is revealing, not setting beforehand.
But here’s the catch: a partisan breakdown is only as reliable as the polling behind it. So what if the set demographics are skewed? What if a pollster is interviewing too many (or too few) cell phone respondents? What if a pollster is assuming a higher percentage of Latinos will vote than ultimately will? What if one region of the country is going to vote in larger numbers than the pollster is assuming? What if the fact that fewer and fewer people will bother answering a poll is also throwing off the accuracy of polls?
These are all reasonable, fair questions. What conservatives are saying is that these partisan breakdowns often seem way out of line with other data we have. We know Republican enthusiasm is up from 2008, and we know Democrat enthusiasm is down. We know that 2008 was a crazy anomaly; since the 1980s, Democrats have never had more than a 4-point advantage in a presidential or mid-term election, with the exception of 2008, when Obama coasted to victory on a Democrat plus 7 advantage. So conservatives look at all these factors and say: Yes, it looks strange that so many of the polls are assuming the electorate will look identical to the 2008 electorate (or, in some cases, even more Democratic than it was in 2008).
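To make the arithmetic behind all this concrete, here’s a minimal sketch of how reweighting a poll’s partisan mix moves the topline number. All the crosstab figures below are hypothetical – they are not NBC/Marist’s actual numbers – but the mechanics are the same as in the “cut the Democratic advantage” exercise described above.

```python
# Hypothetical sketch: how a poll's topline margin shifts when you change
# the assumed partisan mix. All numbers are invented for illustration.

def topline(margins_by_party, party_shares):
    """Weighted candidate margin: sum of each party's internal margin
    (candidate A minus candidate B, in points) times that party's share
    of the sample."""
    return sum(margins_by_party[p] * party_shares[p] for p in party_shares)

# Suppose the margin within each group (Obama minus Romney, in points):
margins = {"D": 85, "R": -85, "I": 0}

# A D+9 sample, e.g. 38% Democrat, 29% Republican, 33% independent:
d_plus_9 = {"D": 0.38, "R": 0.29, "I": 0.33}
print(topline(margins, d_plus_9))  # roughly a 7.65-point lead

# The same interviews reweighted to a D+4 electorate
# (36% Democrat, 32% Republican, 32% independent):
d_plus_4 = {"D": 0.36, "R": 0.32, "I": 0.32}
print(topline(margins, d_plus_4))  # the lead shrinks to roughly 3.4 points
```

Nothing about the individual interviews changed between the two runs – only the assumption about what the electorate looks like. That’s the whole dispute in miniature.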
In a couple of days, we’ll know the results of the presidential contest, and we’ll have exit poll numbers about the partisan breakdowns. That will give us an answer as to whether the polls correctly or incorrectly predicted the partisan make-up of the electorate.
But meanwhile, let’s all remember: polling is not some precise, exact science. It’s based on plenty of guesses – educated, smart, informed guesses, yes, but still guesses – and assumptions. And there’s nothing wrong with questioning and thinking critically about whether the particular set of assumptions used in 2012 is the right one.