
Can States Regulate Social Media Without Drawing First Amendment Challenges?

Despite free speech challenges, state legislators have continued passing laws that age-gate websites or override platforms' terms of service. Experts say there are ways to protect users without drawing First Amendment lawsuits.

The entrance to the Supreme Court building. (Shutterstock)
In Brief:
  • Reflecting growing concerns about teen safety online, legislators around the country have introduced more than 300 bills aimed at protecting minors on social media. Several states' laws have drawn First Amendment challenges.

  • Legal experts critical of these laws argue they risk violating the First Amendment by limiting teens' right to self-expression. Legitimate privacy concerns may also deter adults from using the same websites, creating a "chilling effect."

  • Experts suggest two potential options for legislators interested in protecting social media users: algorithm auditing to discover risk and narrower, more nuanced legislation targeting specific harms.


Just this year, legislators in at least 30 states have introduced more than 300 bills focusing on minors’ use of social media. The bills reflect a growing concern from state lawmakers about how — and whether — social media companies are protecting their users.

Recent studies have shown that social media app usage may contribute to an increase in teens' experiences with depression.

“These are all well-intentioned, and I really applaud these legislators who are drafting these laws,” says Nancy Costello, the director of Michigan State University’s First Amendment Law Clinic. But in her opinion, “they’re taking too sweeping of an approach to try and solve this problem.”

Several states, including Louisiana, Mississippi, Montana, North Carolina, Florida and Texas, have laws on the books mandating some form of age verification for social media users, often in an attempt to keep minors off these platforms. (Many of these laws limit social media access for minors under 18 and keep minors under a certain age, usually 16, from creating and using social media accounts without parental consent.)

Instead, Costello and other experts believe, state lawmakers should be looking for ways to legislate more narrowly, setting up guardrails that are less likely to draw First Amendment challenges. In particular, lawyers who spoke with Governing suggested lawmakers should target predatory algorithms and legislate around specific harms.

In the past year, several states with laws that aim to keep minors off of social media have been sued by free speech organizations as well as trade organizations working on behalf of social media platforms. These organizations accuse states of potentially silencing teens' voices by limiting their access to social media or restricting the visibility of their posts. Some legal experts have also argued that such laws could run up against free speech rights because they disregard the First Amendment right to anonymity, a right reaffirmed almost 30 years ago in the landmark free speech case McIntyre v. Ohio Elections Commission.

The Foundation for Individual Rights and Expression, a First Amendment advocacy organization that has fought against “cancel culture” on college campuses, for example, challenged Utah’s social media act shortly before it was set to take effect in March, arguing that it limits teens’ right to self-expression. Other states that have been sued over their social media laws include Arkansas, Ohio, California and Mississippi, with challengers criticizing the role the government could play in controlling who has access to social media.

Another potential issue is the “chilling effect” age gating could create for adult users hesitant to upload proof of their age to companies, says Thomas Berry, editor-in-chief of the Cato Supreme Court Review, part of the libertarian think tank the Cato Institute.

“When someone has [a] legitimate concern [around data breaches/losing anonymity], and that concern causes someone not to visit a site that they otherwise would have if they were able to visit without that age gating, that’s a limitation on speech,” Berry says.

Legislation regulating social media has even made it all the way to the Supreme Court: In July, the court issued an opinion finding that two state laws (both of which aimed to protect conservative viewpoints on social media) attempted to curate and moderate platforms in ways that “interfere with protected expression.” The decision was a “victory for the free speech rights of online platforms,” Berry wrote in an essay for Cato, because it acknowledged social media companies’ right to control the content on their platforms. (The court did not rule on the laws themselves, but sent them back to the lower courts to rule on all the constitutional questions.)

So how might states looking to protect young social media users avoid running up against free speech challenges?

One alternative is to enact legislation focused on algorithm auditing, Costello says. She's had a hand in developing model legislation proposed by Harvard’s T.H. Chan School of Public Health that seeks to discover and measure the potential harms of social media.

“With that proof of harm, you go back to what is a well-known legal cause of action: deceptive advertising and unfair business practices,” Costello explains. “And that’s how we think we can force social media companies to mitigate this harm.”

Last year, New York began requiring algorithm audits as part of its effort to regulate "automated employment decision tools" that use AI to screen job applicants. These required audits provide statistics on how often job applicants from different demographic groups advance when employers use automated hiring tools. Similar legislation for social media algorithm audits could require that companies hire a third-party firm to develop and track outcomes. The model legislation developed at Harvard, for example, measures how much content about eating disorders can be seen by teens on social media platforms without them seeking it out.
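
To make the auditing idea concrete, below is a minimal sketch, in Python, of the kind of exposure metric a third-party auditor might compute from logged feed data: the share of algorithm-recommended items shown to teen test accounts that reviewers flagged as harmful, even though the accounts never searched for that content. The data model and names here are hypothetical illustrations, not the methodology prescribed by the Harvard model legislation or New York's audit rules.

    # Hypothetical sketch of an algorithm-audit metric; the names and data
    # model are illustrative, not drawn from any statute or model bill.
    from dataclasses import dataclass

    @dataclass
    class FeedItem:
        account_id: str    # teen test account that received the item
        recommended: bool  # surfaced by the algorithm, not found via search
        flagged: bool      # labeled harmful by a reviewer or classifier

    def unsolicited_exposure_rate(log: list[FeedItem]) -> float:
        """Share of algorithm-recommended items flagged as harmful."""
        recommended = [item for item in log if item.recommended]
        if not recommended:
            return 0.0
        return sum(item.flagged for item in recommended) / len(recommended)

    # Toy audit log for two test accounts; a real audit would draw on
    # thousands of items collected by an independent auditing firm.
    log = [
        FeedItem("teen_account_1", recommended=True, flagged=True),
        FeedItem("teen_account_1", recommended=True, flagged=False),
        FeedItem("teen_account_2", recommended=True, flagged=False),
        FeedItem("teen_account_2", recommended=False, flagged=True),  # searched-for, so excluded
    ]
    print(f"Unsolicited exposure rate: {unsolicited_exposure_rate(log):.0%}")  # -> 33%

However such a metric is defined, the point is that an audit reduces algorithmic harm to a number regulators can track over time, which is what would let enforcement proceed under existing deceptive-practices law rather than through content restrictions.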

Chris Marchese, the director of the Litigation Center at trade association NetChoice — which has also brought many free speech challenges to social media bills — says artificial intelligence regulations could serve as a template for regulating social media.

At the state level, AI laws take a wide variety of forms. Many pieces of AI legislation that have either passed or are pending focus on deepfakes and deceptive media generated by AI. These bills are usually narrow, targeted at a specific implementation of the technology or a stated harm.

“I'm hoping that the mentality that lawmakers are having on things like AI might carry over into the online safety space,” Marchese tells Governing, “because it really isn't the case that we have to violate the Constitution in order to protect kids.”

Zina Hutton is a former staff writer for Governing. She has been a freelance culture writer, researcher and copywriter since 2015. In 2021, she started writing for Teen Vogue. At Governing, Zina focused on state and local finance, workforce, education and management and administration news.