June 27, 2022, Issue

(Roman Genn)

Ban Kids from Social Media

The case for keeping them offline

Christine Rosen

Why don’t we have a legally enforceable age requirement for the use of social media? As a society, we long ago agreed upon age-restriction laws governing a range of behaviors (driving, voting, enlisting in the military, smoking, drinking alcohol, getting a tattoo). Why do we treat social-media use differently?

A recent Common Sense Media survey of social-media use found a significant increase in the number of children ages eight to twelve (so-called tweens) using social-media platforms such as Snapchat, TikTok, and Instagram. “The huge number of kids using social when they’re so young — it makes me want to cry,” Diana Graber of Cyberwise told the New York Times. “These social-media apps are not designed for children.”

And yet for far too long we’ve effectively acted as if they were, because we’ve done little to prevent children from having access to them. The age limit of 13 that currently governs social-media platforms was arbitrarily chosen as part of the Children’s Online Privacy Protection Act (COPPA), which came into effect in 2000 (four years before Facebook was created). It was meant to restrict how companies could use children’s data and to require “verifiable parental consent” for users younger than 13.

As anyone who has ever stumbled across an eleven-year-old’s Instagram account will tell you, however, the system never worked. The request to verify one’s age is merely a suggestion, with no real effort at verification. There are no financial or legal repercussions for the companies that fail to confirm the ages of their users, and every financial incentive for them to look the other way as underage kids create accounts. You could call it an honor system, but there is little that is honorable about the goals these social-media companies have set for drawing ever-younger users to their platforms.

Other countries have stricter rules. In Germany, Ireland, and Switzerland, for example, the age of digital consent is set at 16, with more legal requirements in place for age verification. Massachusetts senator Ed Markey, one of the original sponsors of COPPA, wanted to set the U.S. age at 16 but was thwarted by lobbying both from technology companies concerned about profits and from civil-liberties groups that, according to a Wall Street Journal story about COPPA, feared that “requiring teens to obtain parental permission might curtail their ability to access information about birth control and abortion.” Of the age-13 compromise, Markey said, “It was too young and I knew it was too young then. It was the best I could do.” 

Since the earliest days of social networks such as Facebook, people concerned about these platforms’ impact on children have heard a consistent message: It’s up to you, as parents, to control your children’s use of these tools. Teach them media literacy! Monitor screen time! Delay getting them a smartphone! 

All of this is good advice (I know; I’m one of many people who made such arguments in the past), and it rests comfortably within a conservative worldview that is rightfully skeptical of turning to federal-government mandates or heavy-handed legal solutions to solve family problems. 

Perhaps it is the retreat into digital spaces that happened during pandemic lockdowns and the difficulty many children are having now as they try to emerge from that forced cocoon; or perhaps it is the constant stream of stories about the harms that social media inflict on younger children (e.g., “Snapchat dysmorphia,” whereby people become unhappy with their bodies and seek to alter them so as to appear more like the heavily filtered images on social-media platforms). But whatever the precise causes, in recent years, something has shifted, and the attempts of individual adults to set limits for their kids have come to seem less like responsible parenting than like rearranging deck chairs on the Titanic.

For many parents, even those who place limits on their children’s use of platforms and devices, the problem of social media is no longer a private one. Social media drive political and cultural and educational conversations to such a degree that a collective solution to the problem they pose for children is long overdue. We have enough evidence of the harms and dangers of social-media use by children, as well as plenty of examples of the lack of concern demonstrated by the companies profiting from children’s use of the platforms, to acknowledge that the “best we could do” 20 years ago at the dawn of the social-media era is no longer good enough. 

The law is a blunt instrument for solving complicated social problems, and yet sometimes it is the only one at hand. 

Social-media use should be limited by law to adults, or at the very least to people 16 years old and older.

The original age limit for social-media-platform use was created to protect children from targeted advertisements and the collection of their data. Today, the harm of social media isn’t the ads; it’s the use of the platforms themselves. It’s the daily opportunity cost placed on children who spend an increasing amount of their time on social media instead of engaged in other social activities.

Social-media use by children has spawned bizarre social-contagion effects, far beyond whatever this week’s viral TikTok challenge might be (e.g., the “devious licks” challenge that encouraged kids to damage school property). A study published last year in Archives of Disease in Childhood examined the increase in the number of teenagers suffering from tic disorders; it located one possible cause in the popularity of TikTok videos of “influencers with symptoms.” “Some teenage girls report increased consumption of such videos prior to symptom onset, while others have posted videos and information about their movements and sounds on social media sites,” the study found. “They report that they gain peer support, recognition and a sense of belonging from this exposure.”

The technology companies know that the platforms they are selling can be harmful for many users. Internal research by Instagram (which is owned by Facebook’s parent corporation, Meta) found that even though one-third of teenage girls who use Instagram report that it “made them feel worse,” they found themselves “unable to stop” using it. A large, multi-year study by researchers at Cambridge University published this year found that social-media use negatively affected life satisfaction and mental health for children, particularly around puberty (ages 11–13 for girls and 14–15 for boys). Coverage of these findings always strains to point out the many positive effects of social-media platforms. “It carries risks — peer influence, contagion, substance use,” one researcher told the New York Times. “But it can also carry lots of positive things. . . . I think a lot of times that does get overlooked because we’re so focused on risks.”

But researchers don’t like to acknowledge that they are working with limited information. To thoroughly weigh the benefits and risks that these platforms pose to children requires data that scholars can subject to granular analysis. Those data — your data and your children’s data — are owned by the social-media companies. And they won’t share them. 

The risks we do know of are significant and growing. Children have gotten used to being harassed and threatened online in a way that would have been alarming to previous generations. As the Washington Post reported, the shooter in the recent Uvalde, Texas, school massacre regularly used social media to threaten and harass people, albeit anonymously, on livestream social-networking platforms such as Yubo (slogan: “Get friends. Get real”). No one thought to tip off authorities because the threats were “seen by strangers, many of whom had never met him and had found him only through the social messaging and video apps that form the bedrock of modern teen life.” Yubo, which the Post calls “Tinder for teens,” has been downloaded 18 million times in the U.S. “I witnessed him harass girls and threaten them with sexual assault, like rape and kidnapping,” said one teenager who spoke to the paper. Another said this was just “how online is.”

The list of social-media platforms grows every year: The grand dame, Facebook, is now just for old people; but Instagram and WhatsApp (both owned by Meta) remain popular among the young. Then there’s Snapchat, Discord, YouTube, and Twitter. And of course the Chinese-owned TikTok. All of these platforms measure success by the amount of user engagement, regardless of whether that engagement is angry, violent, conspiratorial — or from underage children. 

Indeed, technology companies are betting the future on social platforms that are even more immersive. Meta hopes to create a form of “social virtual reality (VR)” in the Metaverse that would use VR technology (goggles, eventually sensors) to allow social-media users to interact with one another, raising a host of issues about the impact of such technology use on children. (VR headsets currently aren’t considered safe for use by children, but no doubt that concern will fade as the profit-making possibilities of VR kids’ games and platforms increase.) 

Which is why we should take the opportunity now to impose some limits on children’s use of these platforms. This is not a new proposal. I first heard it floated by the American Enterprise Institute’s Yuval Levin, who mentioned it as a potential solution during a group conversation about technology. Last year, Ben Pring argued in TechCrunch that social media should be limited to adults. “We don’t let young people drive, drink, smoke, get married, join the Army, get a tattoo or vote until we feel they’re old enough to handle it,” he noted. And in a recent issue of National Affairs, Chris Griswold offered an extensive analysis of regulatory efforts, including age restrictions, that could be adopted to protect children from the harms of social media.

There is a contradictory impulse at play in the culture right now: The cult of safetyism regarding children (wherein everything must contain a warning label, playgrounds must come equipped with cushioned ground cover and no sharp angles, and helicopter parents are expected to hover nearby with helmets and healthy snacks available at all times) dominates the offline world. By contrast, there is little more than resignation and hand-wringing when it comes to children’s online activities. 

At the same time, high-risk behaviors in which children engaged in greater numbers in previous eras (drunk driving, smoking, sex) are declining, while rates of mental-health disorders are climbing precipitously. As a report on adolescent mental health in the New York Times noted:

In 2019, 13 percent of adolescents reported having a major depressive episode, a 60 percent increase from 2007. Emergency room visits by children and adolescents in that period also rose sharply for anxiety, mood disorders and self-harm. And for people ages 10 to 24, suicide rates, stable from 2000 to 2007, leaped nearly 60 percent by 2018, according to the Centers for Disease Control and Prevention. 

Even for children not suffering from mental-health challenges, the time every day spent on social media (an average of five hours, according to Common Sense Media) has clearly replaced other, healthier social activities for children. Data from surveys such as the CDC’s Youth Risk Behavior Surveillance System and the Monitoring the Future Survey from the National Institute on Drug Abuse show that tweens and teens today get less sleep and less exercise and less in-person time with their peers than did previous generations. 

Yet instead of imposing some serious limits on who can use these platforms, parents are told to “ask your kids how they feel” about social media and “ask them what they’re looking at and doing online.” This shifts the burden of proof of harm to parents, who must assess the risk with only limited information from the companies that create and manage these platforms. An across-the-board, enforceable age limit would place the burden where it belongs: on the companies that have allowed children free rein on platforms designed for use by adults. 

The objections to raising the age limit for social media are easy to predict:

It would be impossible to enforce.

In fact, age-verification systems of many types are already available and could easily be used for this purpose. No, liquor laws did not automatically end underage drinking, and there will always be people who subvert the law to their own ends. But just because some people evade a law doesn’t make it wholly ineffectual or unworthy of having on the books. 

Fears about social media are just this generation’s version of a moral panic.

No one wants to be labeled their generation’s Mrs. Grundy. In the 1980s, former second lady Tipper Gore and her Parents Music Resource Center were pilloried as priggish censors for arguing that warning labels should be placed on albums with explicit lyrics. Today, we live in a world where those warning labels are still there, but groups such as Migos perform “Bad and Boujee” (sample lyrics: “F***in’ on your b****, she’s a thot, thot. Cookin’ up dope in the crockpot”) on daytime talk shows such as Ellen while the middle-aged ladies in the audience dance along, albeit to a song with every other word bleeped out for viewers at home. 

The lesson here isn’t that moral panics quash free expression; it’s that cultures change, but not always for the better. It’s important to recognize when the change is going in the wrong direction and to do something about it. 

Enforceable age limits for social-media use would not require censorship. Instead, they would treat social media as technology companies have long claimed they want us to treat their technologies: as tools. Like an automobile, social media have both benefits and serious potential risks if used irresponsibly. Age limits would treat social-media platforms as tools that require some maturity and training to operate. Such forms of regulation are apolitical but would help parents in their efforts to keep kids safe, as driver’s licensing and seat-belt and liquor laws do. 

Age-limit laws unfairly restrict kids from using something that is useful for them in connecting with others.

Can a 16-year-old use social media responsibly and beneficially? Perhaps, depending on the platform and depending on the child. 

But the evidence continues to roll in that even adults are often unable and unwilling to use such platforms in a responsible manner, and the ill effects of those decisions have been felt in our politics and culture for some time now. 

Raising the age limit for social-media use to 16 (or 18) would seem draconian to a generation of adolescents already habituated to spending hours on Discord and Snapchat. But as we embrace new technologies, we need to assess their benefits and harms honestly, and not continue to act as if everyone will use them the way they were ideally intended to be used. When automobiles became more common in the 1920s, states began setting age limits and licensing requirements for their operation (after witnessing the damage that could be caused by someone untrained behind the wheel). By the mid-20th century, most states had adopted 16 as the minimum age, with license requirements that varied by state. 

We have had almost 20 years of social-media experience, and Americans are just now beginning to express a desire for greater oversight of these platforms, often on issues related to censorship and content removal. Shouldn’t we also reassess the protections in place for children? 

We cannot turn back the clock and force a generation of children not to grow up online, but neither should we ignore our unease about the world our teenagers and younger children inhabit. That unease reveals a larger fear about the habits of mind that social-media platforms encourage and the sense of self they inculcate in children. They strip away layers of the private self, incentivizing people to perform their lives online and rewarding them for behavior that doesn’t support the formation of good character and judgment. And as we have seen, despite the promises of connection made by social-media companies, these platforms have not obviously created greater unity; on the contrary, they seem to have contributed to greater polarization and tribalism. 

Large social-media companies can afford to lose the revenue they now generate from child influencers and tween TikTok users. The more important question is: Can we afford to lose another generation of children to the whims of social-media platforms that were made for adults, built to keep users engaged (and enraged) as much as possible, and designed without concern for their impact on our most vulnerable citizens? Perhaps one day someone will create a perfectly designed social-media platform that brings only happiness and joy to children’s lives, or parents will hit upon the perfect formula for monitoring their kids’ screen time. But that day is a long way off. For now, we should put social media legally off limits to children.