The ‘Digital Voter Suppression’ Double Standard


Efforts to redefine negative campaigning imperil online speech.

A recent Aspen Institute report on “information disorder” takes aim at political advertising on social media. The report claims that “political campaigns” use what are known as “promoted posts” to “smear opponents and drive voter suppression.” That description — and the broadening of “voter suppression” to include negative political campaigning — is part of a growing trend that threatens political speech online. 

Voter suppression once meant poll taxes and purged voter rolls. The term referred to efforts to legally or physically interfere with voters’ attempts to cast their ballots. Attempts to beguile or confuse voters, on the other hand, have long been normal — if unsavory — aspects of political campaigning. Under this traditional standard, limiting voting times or closing polling places in an opponent’s strongholds amounts to voter suppression, but misleading voters about an opponent’s policy positions does not. 

Over the past few years, however, a new, broader definition of voter suppression has emerged. It treats online misinformation, dirty tricks, and negative campaigning as voter suppression, inviting greater control of political speech on social media. Because this new definition is reserved for misleading political speech on the Internet, its use creates differences between the norms of on- and off-line political discourse. 

This disparate treatment contributes to partisan rancor when one party’s digital messaging is held to a different standard from the other’s analog speech. Radio and television providers rarely police political advertising. Expecting social-media platforms to quash lies and negative campaigning places them uniquely at the center of subjective partisan squabbles. 

*     *     * 

The new definition of voter suppression was first used to describe a 2016 hoax encouraging Twitter users to vote for Hillary Clinton via text message. Americans cannot vote via text message, so anyone who acted on the false claim would have missed the chance to vote. The pseudonymous troll behind the text-to-vote campaign has since been charged under a federal statute prohibiting conspiring to interfere with the rights of others. Whether or not the prosecution succeeds, social-media companies may prohibit this category of speech without great difficulty and without contributing to perceptions of partisan bias. 

Misinforming the public about polling times and locations can be seen as existing in a grey area between traditional voter suppression and mere political disinformation or negative campaigning. Like other misinformation, its effects can be corrected with more speech. However, false polling information misleads voters about the civic process, rather than about any particular candidate or viewpoint. Rather than persuading them to vote or refrain from voting on the basis of false information, electoral-process misinformation is intended to trick potential voters out of casting a vote. 

Whereas this category of electoral-process misinformation is fairly limited, the new “voter suppression” label is used much more widely. Take Vox’s Joss Fong, who makes the jump from procedural misinformation to general political misinformation by writing: “Twitter has a policy banning misinformation about voting procedures, Facebook and YouTube do too, but there are other types of digital voter suppression that may be harder to tackle.”

Though she looks first only at Russian misinformation, she goes on to describe any messaging that “discourages participation in elections” as digital voter suppression. This is a much wider standard. It includes speech critical of American democracy and the two-party system, such as the slogan “Don’t Vote, It Only Encourages the Bastards,” alongside misinformation about polling locations. 

Various members of Congress have encouraged social-media platforms to suppress such “digital voter suppression,” despite the fact that many examples include constitutionally protected speech. Representative Maxine Waters (D., Calif.), for instance, has claimed that Facebook’s failure to fact-check political advertisements would lead to voter suppression: 

Last week, you announced that Facebook would not be doing fact checking on political ads, giving anyone Facebook labels a politician a platform to lie, mislead and misinform the American people, which will also allow Facebook to sell more ads. The impact of this will be a massive voter suppression effort that will move at the speed of a click.

Waters isn’t just talking about false voting dates disseminated by Russian trolls; she includes lies told by politicians while campaigning as a cause of voter suppression.  

On the other side of the aisle, Senator Ron Johnson (R., Wis.) argued that a tweet falsely claiming that he had strangled a dog should be deemed voter suppression on the grounds that “obviously, if people think I’m strangling my neighbor’s dog, they may not show up at the polls.” While such misinformation may dissuade voters from reelecting Senator Johnson, it is a far cry from telling them to vote after the election.

Indeed, proponents of this conception of voter suppression assume that nasty, negative campaigning may keep voters home but won’t inspire them to change their allegiance. For such misinformation to have a purely suppressive effect, voters would have to inflexibly vote with their party or not at all. This isn’t the case; voters are not the property of any one party. 

In 2018, then-senator Claire McCaskill (D., Mo.) introduced a bill to criminalize both misinformation about polling locations and false claims of political endorsement. This approach lumps misinformation about the civic process with much less harmful false claims, such as a 2016 rumor that the pope had endorsed Donald Trump. 

This particular claim came from a hoax clickbait site — it was intended to generate ad revenue, not sway voters. A sister site ran a similar story about the pope’s endorsement of Hillary Clinton. First Amendment protections for outright false speech aside, it would be hard to avoid including satire in such a standard. The government may have a compelling interest in preventing miscast ballots, but whom the pope supports is another matter. 

To their credit, some members of Congress have been more hesitant to embrace this approach. Senator Roy Blunt (R., Mo.) offered an early warning about the consequences of this definitional creep for social-media companies. In a hearing held shortly after the 2016 election, he recognized that asking social-media firms to remove this broader category of “voter suppression” speech requires them to constantly make highly subjective, politically charged decisions.  

I think we have to be very thoughtful here about who decides what’s voter suppression and what’s not; who decides what level of speech is acceptable and what’s not. It’s an unbelievable obligation that the government’s never been very good at, and an unbelievable obligation that it sounds like to me your companies are all being asked to assume.

Consider, though, that deceptive messaging seems to be held to an entirely different standard in other contexts. The recent Virginia gubernatorial race offers two instructive examples. 

In the first, the Virginia Democratic Party mailed Virginians literature touting Donald Trump’s endorsement of Glenn Youngkin. The ostensibly Republican-produced mailer featured a photoshopped picture of Youngkin and Trump, only mentioning in barely noticeable fine print that the endorsement announcement had been sent by the Virginia Democrats. 

In the second, just days before the election, the Lincoln Project arranged for a group of actors dressed as white nationalists to chant and pose for photos in front of a Glenn Youngkin campaign bus. (This example was particularly interesting, considering that while it occurred in public, both its spread and unmasking happened online.) Initially, some Democratic operatives reshared photos of the stunt as though it were real. Traditional media initially didn’t know what to make of the episode, and its coverage didn’t move as quickly as discussion of the photo-op on Twitter. While the stunt occurred off-line, few people saw it there — it was primarily a digital spectacle. 

Yet neither of these efforts to spread discouraging disinformation was deemed voter suppression. Why?  

In part, this double standard may derive from our experience with foreign disinformation in the 2016 election. After Russia’s divisive disinformation campaign, it’s been more difficult to separate domestic digital dirty tricks from something more sinister. Off-line trickery has been around longer and is less likely to be the work of foreign agents. 

Some of the problem may simply be partisanship. Democrats have led the charge against voter suppression in social-media ads, but gain little by calling attention to friendly disinformation elsewhere. Republicans have charged Democrats with hypocrisy for campaigning with disinformation while condemning it on social media, but have little interest in prohibiting false political speech or expanding the definition of voter suppression. When social-media platforms are called before Congress, Republican representatives are usually more concerned about overbroad moderation than Democrats’ speech.

The only exception to this partisan interest gap seems to prove the rule that online electoral speech is treated differently. Accountability Virginia, a Democratic PAC, ran ads on Snapchat questioning Glenn Youngkin’s commitment to gun rights. While some of the ads included disclaimers that identified Accountability Virginia as their source, they were designed to look like messages from the NRA and targeted to conservative-leaning areas of the state.

Virginia utility provider Dominion Energy donated money to the PAC. When Axios reported on the PAC’s misleading ads, Dominion asked for a refund of its donation. Nevertheless, Senator Tom Cotton (R., Ark.) sent a letter to Dominion complaining about the “voter suppression ads using misinformation about the Second Amendment.” The Snapchat ads, but not the mailers or the bus stunt, received the “voter suppression” label. While Senator Cotton excoriated Dominion for helping to fund the ad, none of his ire was directed at Snapchat, which hosted the ad despite its ban on “misleading” and “deceptive” political advertising. 

This double standard for the Internet is unhealthy and unsustainable. It has already spurred the introduction of bills criminalizing false speech and satire likely protected by the Constitution. It creates partisan distrust and places social-media firms in an untenable position. 

Government is given no greater constitutional leeway to police political misinformation online. While norms must carry the day, treating digital speech differently is unwise and unsustainable. The norms of American political discourse must either accommodate the same sort of negative campaigning online as they do offline or hold off-line speech to a higher standard. The longer self-restraint is expected to maintain the current double standard, the less it will restrain political disinformation in either space. 

Will Duffield is a policy analyst in the Cato Institute’s Center for Representative Government.