Editor’s Note: The following is adapted from Jim Geraghty’s presentation earlier this week to the Austrian National Defense Academy, the Organization for Security and Cooperation in Europe, and the University of Vienna.
It is absolutely fair game and legal — in free countries, at least — for anyone to get online and say whatever he wants about American presidential candidates. If you want to say this candidate is a fool and that candidate is a bad guy, you’re perfectly free to do so.
What you cannot do is go online and pose as someone else. And if you’re a foreign intelligence service, we really don’t want you impersonating citizens of our free society in a surreptitious effort to sway public opinion, and even elections. Using deception to mask a foreign effort to influence an election violates the United States’ election laws. In 2018, 13 Russian nationals and three Russian companies were indicted on a variety of charges, including criminal conspiracy, wire fraud, bank fraud, and aggravated identity theft — all in an attempt to influence our elections.
The Internet Research Agency, or IRA, is Russia’s primary tool for spreading disinformation and propaganda on the Internet. Beginning in 2013 and accelerating from 2014 to 2016, the IRA used social-media accounts and phony interest groups to sow discord in the United States through what it called “information warfare.”
In 2015, Marat Burkhard, a former employee of the IRA, spoke publicly about his time there, describing hundreds of people working twelve-hour shifts at the IRA headquarters in Saint Petersburg. He described the IRA recruiting people who could write and construct arguments in foreign languages. They were expected to write 135 comments or messages per twelve-hour shift, some assigned to argue in favor of the Russian government’s point of view persuasively and others assigned to argue against the Russian government’s point of view badly. He was himself assigned to write 135 posts about President Barack Obama chewing gum and spitting it out, and told to use a lot of profanity. The IRA began with message boards and website comment sections and eventually migrated to Twitter, Facebook, and Instagram.
Separately, the Main Intelligence Directorate of the General Staff of the Russian Army — known by the abbreviation GRU — hacked into the Democratic National Committee, which is the organization that runs Democratic presidential primaries, and the private email accounts of figures associated with Hillary Clinton’s campaign.
One of their intrusions into these email accounts was frighteningly simple. On March 19, 2016, the chairman of the 2016 Hillary Clinton presidential campaign, John Podesta, was sent an email that said
Someone just used your password to try to sign in to your Google Account email@example.com.
Google stopped this sign-in attempt. You should change your password immediately.
CHANGE PASSWORD –
It offered a link to a site that looked like a password-reset form . . . where the bad guys could steal his new password, log into his account, and copy all of the emails in there.
This story gets worse. Podesta’s chief of staff forwarded the email to the operations help desk of Clinton’s campaign in Brooklyn, where a staffer wrote back concluding, “This is a legitimate email. John needs to change his password immediately.” The email looked genuine enough to fool the young tech guy who was supposed to know the real messages from the fake ones. In an effort to prevent his email from getting hacked, Podesta opened the door for his email to get hacked.
Which isn’t to say it was all that clever. If you need to change your email password, you can just go into the settings and change the password. You don’t have to click on any special link embedded in an email. The Clinton campaign’s operations help desk should have caught that.
This is called “spearphishing,” and the idea is to send messages to everyone in an organization and hope that at least one person follows the instructions. There’s an old saying that a chain is only as strong as its weakest link. Similarly, an institution’s computer network is only as secure as the most gullible people using it, and unfortunately for the Clinton campaign, those turned out to be Podesta and the help-desk staffer.
On April 12, 2016, the GRU gained access to the computer network of the Democratic Congressional Campaign Committee (DCCC), using credentials stolen from a DCCC employee who had been successfully “spearphished” the week before. The committee focuses on electing Democrats to the House of Representatives, and they share a virtual private network with the Democratic National Committee, which oversees elections at all levels — House, Senate, state governors, and the president. Once the Russians could get into the DCCC, they could get into the Democratic National Committee. That’s how they got the second batch of emails that they revealed to the public through WikiLeaks.
Now let’s turn to what Russia’s Internet Research Agency was doing on social-media networks.
From Saint Petersburg, the Internet Research Agency started creating “groups” on Facebook that normal users could join. All told, they created 470 Facebook groups meant to exploit political tensions around the U.S. elections. Most of these groups had names that sounded similar to existing political activist groups: “Tea Party News,” “Blacktivist,” “LGBT United,” “Stand for Freedom,” “United Muslims of America.” One of these groups, “Heart of Texas,” proposed that Texas secede from the United States and take the rest of the Southern states with them.
Some of these groups ended up attracting a lot of members. By the time Facebook deactivated them in mid-2017, the IRA’s “United Muslims of America” Facebook group had more than 300,000 followers, the “Don’t Shoot Us” group had more than 250,000 followers, the “Being Patriotic” group had more than 200,000 followers, and the “Secured Borders” group had more than 130,000 followers. Of course, it is worth keeping in mind that not everyone who joins a group on Facebook pays that much attention to it.
According to the U.S. House of Representatives Intelligence Committee, IRA-related pages created roughly 80,000 pieces of content — basically, posts and graphics — and this content was seen by more than 126 million Americans.
At first, that sounds terrifying. But there are a few caveats we should keep in mind. First, the majority of those Facebook pages and groups drew almost no engagement or reaction from actual users. The top 20 IRA-created Facebook pages received 99 percent of the engagements. The IRA approach was to throw everything against the wall and see what would stick. They spent three dollars or less on about half of their ads. They would put an ad out there, see if it got any reaction, and if it did, they would spend more to get it in front of more people; if not, they would drop it and move on to another idea.
It’s also important to keep in mind that we’re dealing with content that was created by Russians who, by and large, had studied American culture and politics only from a distance. The result was a lot of posts that were so exaggerated, so over-the-top, that they seemed too ridiculous to be genuinely persuasive. For example, a group called the “Army of Jesus” posted photoshopped pictures of Hillary Clinton, with devil horns protruding from her forehead, about to box with Jesus.
In fact, most of the materials generated by “Army of Jesus” are in this same over-the-top tone. If images like those influenced your decision, you were pretty likely to vote against Clinton already.
In fact, we know that these ads weren’t reaching people who were genuinely undecided. Facebook is constantly collecting information about its users, including age, gender, education and income level, job title, relationship status, hobbies, political leanings, favorite TV shows and movies, what kind of car they drive, and what kinds of products they buy. This is what makes the site attractive to advertisers: You can target and reach a particular kind of person.
The Russians weren’t so interested in targeting undecided people with their intense, divisive messages. They were more interested in amping up the anger and enthusiasm of people who already had political opinions.
We know from Facebook’s records that the IRA targeted people by location, job title, and preexisting political interest. One ad set from late September and early October 2016 targeted people in regions of Pennsylvania, ages 18 to 65, whose interests included “Donald Trump” and who had the job title “coal miner.” Being so precise may have hurt them; that ad was seen by 1,225 people, and only 77 people clicked on it.
Russian social-media efforts also focused on Bernie Sanders supporters who were frustrated by his narrow loss in the primaries to Hillary Clinton. One of the images the Russians used to court Sanders supporters, particularly targeted at Facebook users who were identified as gay and lesbian, was of a cartoon Sanders as a bright-yellow bodybuilder.
Bernie Sanders is not bright yellow like a character on The Simpsons, and he is not a bodybuilder. In the United States, we don’t make bodybuilders president; we just make them governor of California. That ad was seen by only 848 people and only 54 people clicked on it.
Having created these groups, the Internet Research Agency tried to organize protests and rallies. They would announce an event, promote it — they spent about $100,000 on advertising on Facebook. And then, when they found someone in the United States who reacted enthusiastically, they’d say that they couldn’t make it to the event and they’d ask that person to take over coordinating the event. Some of these operations were more successful than others — a few had about 200 people turn out, and some had almost no one.
Some of the Russian efforts were clumsy, unsuccessful, and just plain weird. One plan was to recruit someone to walk around New York City dressed up as Santa Claus in a Donald Trump mask. It seems they believed this would get people in New York, which Clinton won 79 percent to 18 percent, to vote for Trump.
The recently completed report by Special Counsel Robert Mueller determined that, on several occasions, Trump campaign officials retweeted, shared, or otherwise promoted pro-Trump messages from IRA accounts, not knowing that they were generated in Saint Petersburg. That’s a little unnerving, but it illustrates a reality of modern political campaigns: If somebody on social media is saying that your candidate is great, you don’t often spend much time checking that person out; you just retweet or share it with your audience because you want to believe that this reflects genuine grassroots enthusiasm for your candidate.
One fact that’s been overlooked in much of the public discussion over Russian efforts on Facebook is that, while 44 percent of total ad impressions (number of times ads were displayed) were posted before the U.S. election on November 8, 2016, the other 56 percent came after the election. Roughly 25 percent of the ads were never shown to anyone because Facebook’s algorithm didn’t deem them relevant to any users.
Some of the ads that ran after the election were designed to stir up opposition to Trump. During the election, Russian efforts aimed to promote Trump over Hillary Clinton, but the ultimate goal of the Russians was not merely a Trump presidency; it was to have American society as divided and angry as possible. The more divided we are, the less likely we are to effectively stand up to them when they do something we oppose.
If you’re on Twitter long enough, and particularly if you write about politics, you will get a reaction from someone, or maybe multiple people, whose Twitter identities are a name, initial, and seven or eight random numbers. Often, but not always, those folks will have something nasty to say to you.
Back in August 2017, one social-media analyst studied accounts that used two hashtags, “#UniteTheRight” and “#FireMcMaster” (referring to former U.S. national security adviser H. R. McMaster, who was seen as a particularly tough-on-Russia figure in the first year of the Trump administration). He found 824 accounts with handles that used a name and random digits. Then he collected all the followers of those Twitter accounts, and all the followers of those followers. He found more than 63,000 accounts that were using “Marcus,” “Margaret,” and other names followed by random numbers.
One of the users who had the most of these accounts following him, named “David Jones,” claimed to be a pro-Brexit activist who lived on the Isle of Wight. The analyst found that “David Jones” always tweeted all day long from 8 a.m. to 8 p.m. Moscow time. Almost like it was his job or something. And “David Jones” seemed to feel strongly that Ukraine should not join the EU and that the United Kingdom needed to “team up with Russia to put the EU in its place.”
By January 2018, Twitter had publicly identified 3,814 Twitter accounts associated with the IRA. According to Twitter, in the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets, “approximately 8.4 percent of which were election-related.” This means that nearly 92 percent of the tweets weren’t election-related. Maybe the IRA folks figured out that to be seen as a believable account, they had to tweet about non-political topics as well.
Twitter also announced that it had notified approximately 1.4 million people who may have been in contact with an IRA-controlled account.
And it gets worse. Once Twitter identified the IRA-controlled Twitter accounts, one group of media-watchers and technology analysts decided to investigate how many of those IRA accounts were quoted by professional news organizations. They looked at 33 news outlets; 32 quoted at least one IRA account. And few corners of the media were spared: The left-of-center Huffington Post quoted them 16 times; the right-of-center Daily Caller, 15 times; USA Today, nine times. Most of the time, the tweets were quoted as examples of public opinion.
The study noted that the liberal outlets often mocked right-leaning IRA accounts. Russian users in Saint Petersburg were providing the ridiculous over-the-top “conservative” perspective that liberal publications wanted to see, and those publications treated them as if they were real. “Don’t take everything you read on the Internet at face value” isn’t just good advice for ordinary users; it is a message that some professional journalists need to hear.
As with its activity on Facebook, the IRA created a lot of pages on Instagram, but a handful attracted huge engagement while the majority attracted almost none. Forty pages received 99 percent of the 185 million “likes” received by IRA content.
It is possible that the IRA’s Instagram engagement was the result of “click farms,” an artificial way of generating lots of clicks, shares, and other engagement.
Interestingly, there was less media coverage of Russia’s efforts on Instagram compared with Facebook and Twitter. This is for three reasons. First, while Facebook and Twitter combine visuals and written text, Instagram is more photography-focused, which leaves less room for political messaging. Second, Instagram’s content focuses less on politics and recent events, so the people using it are less likely to react to political messaging. Third, the Instagram user population is younger. A study earlier this year calculated that 60 percent of its users are between 18 and 24 years old, and 90 percent are younger than 35. Younger voters are less likely to show up on Election Day. If you’re seeking to influence voters, you may decide that the Instagram users aren’t as high a priority as Facebook or Twitter users.
The IRA appears to have begun making YouTube videos in September 2015, producing 1,107 videos across 17 channels. A few channels were active until July 2017. Interestingly, the IRA YouTube effort focused overwhelmingly on one topic: 96 percent of its channels’ content related to Black Lives Matter and police brutality. Because African Americans generally vote Democratic and would have been more likely to vote for Clinton, some have speculated that the goal was to depress turnout; most of the messaging aimed at these communities didn’t mention the election at all. The videos that did mention the election pushed the message that Hillary Clinton was a liar, that there was no difference between Clinton and Trump, and that African Americans should stay home because neither candidate “deserved” their vote.
Russia blazed the path here, but other countries and regimes are following. In August 2018, Twitter announced it had suspended 770 accounts — all appearing to originate in Iran, with potential ties to its government — for engaging in “coordinated manipulation.” Facebook and YouTube similarly spotted and took down Iranian content, some of which had been linked to state-owned media.
These accounts were uncovered not by the NSA or U.S. law-enforcement agencies but by a private cybersecurity firm. The firm figured out that the contact emails for sites such as the “Liberty Front Press” and “Instituto Manquehue” were associated with advertisements for website designers in Tehran and with Iranian sites. They found supposedly American Twitter accounts that were linked to phone numbers with the Iranian country code. And they found a number of accounts, supposedly belonging to American Sanders supporters, heavily promoting Quds Day, a holiday established by Iran in 1979 to express support for Palestinians and opposition to Israel.
Again, the people who do this sort of thing are often clumsy and sloppy. They’re heavy-handed with their propaganda because the client — meaning the foreign government — doesn’t want nuance or a balanced perspective. They’re trying to imitate the tone of a person in a country that isn’t their own, and it often shows.
So why don’t social-media companies just ban these accounts? In most cases, as the companies identify these accounts, they do. But we should keep in mind the difficulty of the task.
It doesn’t take much time or cost any money to set up a Facebook, Twitter, or Instagram account. You don’t have to pass any background check other than maybe providing a phone number. Facebook, Twitter, and Instagram want people to be able to set up accounts easily and at no cost.
But this means that the bad guys can set up accounts quickly and easily too. Facebook can shut down one account and they can go and set up another one with a new name.
Social-media advertising that is secretly funded or run by foreign intelligence is going to be tough to stamp out. It’s not hard for a foreign intelligence service to move money to some front company, group, or individual, and have them start pumping out memes and messages to favor one candidate, attack another, or divide Americans or any other group of voters.
So how can we respond to this?
One bit of good news is that the United States government is on the case. Before Election Day 2018, the U.S. Department of Defense’s Cyber Command announced that it would be sending text messages, emails, and pop-ups to Russian operatives meddling in the midterm elections, informing them that their actions were being monitored — sort of a “shot across their bow” to signal that we know who they are, what they’re doing, and how to find them.
Then, on Election Day 2018, the professional trolls at Internet Research Agency in Saint Petersburg showed up for work and could not access the Internet. At all. For two days, they couldn’t log on to any of their social-media accounts.
Maybe this was the U.S. Department of Defense’s Cyber Command. Maybe it was the United States National Security Agency. Or maybe some Russian just tripped over a wire and unplugged something.
For whatever reason, the Internet Research Agency could not spread any rumors about illegal votes being cast or votes not being counted or anything else to undermine public faith in the election.
But what do we do if we’re not working at Cyber Command or the NSA, or in cybersecurity at one of the big social-media companies?
Again, most producers of disinformation on social media are pretty clumsy. Earlier this year, someone forwarded me an email that claimed to tell the tale of two Americans who went to Paris and found streets filled with trash, African and Middle Eastern refugees everywhere, Muslims putting down prayer mats and praying on the subway, and that the Louvre was deserted except for the army on patrol. The email clearly meant to deliver the message that Paris was overrun by refugees and terrorists and that we can’t let that happen to our country. It included some photos that were allegedly taken by the couple.
I did an image-search on Google for all of the photos in the email and found that the photos were mostly from a riot in Calais in 2016. The picture of a praying guy was from Copenhagen, Denmark. My favorite was an image that supposedly showed a guy urinating on the Paris Metro. The map on the inside of the car shows the New York subway system.
The people who generate this sort of disinformation tend to be sloppy.
But perhaps most important, disinformation doesn’t work on people who already know the truth. If I try to persuade you that your president has two heads, you’re probably not going to believe me. You’ve already seen your president and you can count.
The best defense against disinformation is a better-informed and less credulous public that doesn’t automatically believe everything it reads on the Internet and doesn’t gleefully share any information that reaffirms its preconceptions.
The person who sent me that email about Paris and immigrants felt embarrassed for being fooled and angry at the person who sent it to them. There’s room for legitimate debate about immigration, refugees, security, terrorism, multiculturalism, and assimilation. You can’t advance that debate by making up a story and looking around for any photo you can find that makes someone with dark skin look bad.
Those of us in the news media have a duty to keep our eyes on what’s being spread around social media and to carefully and even-handedly separate truth from fiction. Fact-checking is important, and it’s also really important to not be condescending or snide to the people who may be initially fooled by provocative posts. If you attack someone or call him a fool, he’s going to dig in further.
Forewarned is forearmed. We know that it’s very likely that Moscow and other hostile regimes are going to try stunts like this again. Now it’s easier to call them out when a Twitter account named with a random sequence of numbers is suddenly passionately arguing in favor of Russia’s position.
Russia, like any other regime in an unfree country seeking to influence the elections and political decisions of free countries, has a lot of tools to spread disinformation. But its agents are not super-geniuses. They’re not ten feet tall. And they can’t brainwash you. They can’t make you believe something you weren’t already inclined to believe.
Two final thoughts. First, we should not use the problem of Russia or other countries trying to exploit political or social divisions as an excuse to try to shut down discussion of divisive issues. The fact that Russians generated a lot of content designed to get African Americans upset about police brutality doesn’t mean that African Americans don’t have legitimate concerns about police brutality. The fact that Russians generated a lot of content designed to stir up animosity about illegal immigrants doesn’t mean that Americans don’t have legitimate concerns about illegal immigration. Nothing will add more fuel to the fire than telling people, “What’s bothering you is not a real problem.”
Finally, there’s always one big difference between disinformation meant to exploit divisions and genuine political activism. Almost nothing produced by the IRA proposed any solutions. When they did propose solutions, it was usually something extreme like “secede from the union!”
The IRA had no interest in giving you a sense that a problem can be fixed; it was all about making you angrier. They want you to feel helpless. They want you to feel ignored and to conclude that the traditional methods of democracy — speaking to your representatives, litigating through the courts, following the ethics and principles of your Constitution — have failed and that the only way to fix it is to get angrier, often at your own countrymen. Perhaps the single best way to avoid disinformation, then, is to have some faith in our fellow Americans, and expect that they too want what’s best for the country.
Something to Consider
If you enjoyed this article, we have a proposition for you: Join NRPLUS. Members get all of our content (including the magazine), no paywalls or content meters, an advertising-minimal experience, and unique access to our writers and editors (conference calls, social-media groups, etc.). And importantly, NRPLUS members help keep NR going. Consider it?