To Fight Media Disinformation, Look at Algorithms, Say Experts

The nation is debating Section 230 reform, but fighting social media disinformation may be less about what users can say than about how platforms can amplify and recommend it, said MIT panelists.

Top (left to right): Renée DiResta, Sinan Aral, Jeff Kosseff; bottom: Yaël Eisenstat, Richard Stengel
In the national discussion over retooling social media regulations, what users are allowed to say may be less important than what platforms do with the content once it’s said, according to panelists at the recent Social Media Summit @ MIT.

Debate has been raging over whether the government should change Section 230 of the Communications Decency Act, which shields platforms from liability for content posted by their users. But public officials who want to curb the spread of inaccurate information and the growth of violent movements on these sites could have more impact by taking on the way platforms’ algorithms amplify content and recommend connections with extremist users, said researchers, journalists and legal experts speaking at the Summit.

“You have these platforms … who love to make sure we keep talking about this as a free speech versus censorship problem because that benefits them, because then they don’t really have to discuss the harder issues,” said panelist and Future of Democracy Fellow Yaël Eisenstat, who previously headed the Global Elections Integrity team of Facebook’s Business Integrity unit during 2018.

Should someone attempt to sue Facebook on charges of complicity in the Jan. 6 Capitol attack, Eisenstat said, “I suspect Facebook will try to use the Section 230 argument to say ‘We are not responsible for the conspiracy theories these users are posting.’… Yes, but did your recommendation engines, your targeting tools, your curation, did it do any of these things that actually helped facilitate this crime?”

An April 27 congressional hearing is slated to confront this question of algorithms, and MIT panelists recently offered their own views on whether and how the government can intervene in social media regulation, and on the potential challenges and ramifications.

What’s at stake


A Facebook internal task force recently documented the site’s role as a breeding ground for the falsehoods that festered into the Jan. 6 attack on the Capitol, as well as an organizational tool for the insurgents. This and other high-stakes events may be intensifying the discussion of government intervention in the platforms’ workings.

Social media platforms didn’t invent falsehoods, but they have allowed incorrect information, whether spread by posters’ mistakes (misinformation) or deliberate lies (disinformation), to proliferate rapidly. Common social media features like tailored newsfeeds can leave users in echo chambers where false information is repeated until it becomes normalized.

The platforms were designed to encourage high user engagement — not promote considered public discourse — and the algorithms that determine what items appear in users’ newsfeeds do not distinguish between fact and falsehood, several panelists said. The result is that users may see inaccurate stories repeated without opposing views mixed in, and they can struggle to be aware of — let alone dispute — news items that never appear in their own newsfeeds.
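As a concrete, purely hypothetical illustration of that design, the sketch below ranks a feed with a toy engagement score. The Post fields, weights and function names are invented for this example and do not come from any actual platform; the structural point is that every input rewards interaction and none measures accuracy, so a false post that draws clicks can outrank a true one that does not.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Hypothetical post record; all fields are invented for illustration."""
    text: str
    likes: int
    shares: int
    comments: int
    predicted_click_rate: float  # model's guess at how likely a viewer is to engage

def engagement_score(post: Post) -> float:
    # Toy ranking signal: every term rewards interaction.
    # Note what is absent: nothing measures whether the post is accurate.
    return (post.likes + 2 * post.shares + 1.5 * post.comments) * post.predicted_click_rate

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed surfaces whatever is predicted to engage most, true or false.
    return sorted(candidates, key=engagement_score, reverse=True)
```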

Ali Velshi, NBC correspondent and host of MSNBC’s “Velshi,” said the result is that residents often cannot agree on what is real, so debates about political and social issues become hung up on efforts to establish basic facts. Conversation cannot progress to productive debate over how to respond to realities described by those facts.

“You never get to the discussion of, ‘What should good policing look like?’… [or] ’How should we deal with the application of health care throughout this country?’” Velshi said.
Top (left to right): Clint Watts, Sinan Aral, Ali Velshi; bottom: Maria Ressa, Camille François

The Section 230 question


Government has a variety of tools at its disposal for combating the spread of harmful inaccuracies over social media.

The platforms — like all private companies — have no obligation to provide free expression, and current law already bans certain speech, said Richard Stengel, former U.S. under secretary of state for Public Diplomacy and Public Affairs. He noted that these sites actively monitor content to remove child pornography and material that violates copyright. Government simply needs to update or repeal Section 230, or pass new legislation, to give platforms a similar incentive to remove conspiracy theories and other harmful content.

“Facebook can make any [content] law it wants, but unless you give them more liability, they won’t take content down,” Stengel said.

That added liability could have unintended consequences, however, said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and author of a history of Section 230. To minimize the risk of lawsuits, platforms’ legal advisers would likely recommend removing even genuine news items that prove controversial or draw complaints, he said.

Part of the complication is that while some content is clearly problematic, other posts require more subjective judgment, and platforms are wary of drawing the wrong conclusions. Stengel said government regulation can spare platforms that risk by shifting the decision-making, and the responsibility, to public policymakers.

“They [platforms] want to be regulated more — they don’t like being in the gray area of subjective decisions; they want to be able to say, ‘Well, the government made me do this,’” Stengel said.

Speech vs. amplification


Flagging posts for removal and banning users who breach policies is often an uphill battle given how fast content is created and spread. Summit host and MIT Sloan School of Management professor Sinan Aral pointed to findings from a 2018 study he had co-authored that revealed inaccurate stories are retweeted faster than true ones and reach 1,500 users six times sooner. Real humans played a greater role than bots did in spreading the falsehoods.

This may point to the heart of the problem — not that some users and bots are posting disinformation, but that the platforms’ algorithms are then proactively recommending the posts to a wide audience and encouraging reposting.

“There’s always been this division between your right to speak and your right to have a megaphone that reaches hundreds of millions of people,” said Renée DiResta, research manager at Stanford Internet Observatory. “There’s value to you being able to post your … views but [the] platform doesn’t need to boost it.”
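To make the distinction between hosting speech and boosting it concrete, here is a minimal sketch, under invented assumptions, of an amplification step: once a post’s engagement crosses a threshold, it is shown to people who never followed its author. The threshold, names and example values are hypothetical, not any real platform’s logic; notice that nothing in the branch that extends the post’s reach asks whether the post is true.

```python
# Assumed, illustrative threshold: above it, a post is distributed
# beyond the author's own followers.
BOOST_THRESHOLD = 0.05

def distribution_audience(engagement_rate: float,
                          followers: set[str],
                          non_followers: set[str]) -> set[str]:
    """Decide who sees a post: hosting it (followers only) vs. boosting it."""
    audience = set(followers)
    if engagement_rate > BOOST_THRESHOLD:
        # The amplification step: the platform, not the speaker, extends the reach,
        # and no accuracy check gates that decision in this sketch.
        audience |= non_followers
    return audience

# Example: a high-engagement post reaches strangers the author alone never could.
print(distribution_audience(0.12, {"follower_1"}, {"stranger_1", "stranger_2"}))
```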
Eisenstat similarly said that violent movements planned over Facebook, such as the 2020 attempted kidnapping of Michigan Gov. Gretchen Whitmer, take off in part because social media platforms’ recommendation engines can proactively network the people who later become perpetrators. The platforms recommend that users connect with particular other users and groups they might otherwise never think to search for. Those same recommendation engines also curate inaccurate “news” posts into users’ feeds, spreading the misconceptions, she said.
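A minimal sketch, again with invented data and names, shows the kind of overlap-based suggestion logic Eisenstat is describing: a group is recommended to a user simply because similar users joined it, with no evaluation of what the group organizes around.

```python
from collections import Counter

# Hypothetical membership data: user -> set of groups they have joined.
memberships = {
    "user_a": {"group_1", "group_2"},
    "user_b": {"group_2", "group_3"},
    "user_c": {"group_2", "group_3", "group_4"},
}

def suggest_groups(user: str, k: int = 3) -> list[str]:
    """Recommend groups popular among users who share any group with `user`.

    Overlap alone drives the suggestion; nothing here asks what a group is for,
    which is how fringe communities can be surfaced to people who never
    searched for them.
    """
    mine = memberships[user]
    tally = Counter()
    for other, groups in memberships.items():
        if other != user and mine & groups:    # any shared group
            tally.update(groups - mine)        # count groups this user hasn't joined
    return [group for group, _ in tally.most_common(k)]

print(suggest_groups("user_a"))  # ['group_3', 'group_4']
```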

There may already be tools on the books to tackle this: Eisenstat subscribes to the view that Section 230 is most accurately interpreted as shielding platforms from liability only for users’ speech, not for the actions of the platforms’ own recommendation engines.

A variety of approaches


Platforms can rework their designs and strategies to slow disinformation — if public regulations push them to.

Social media companies could deploy fact-checkers on standby to flag or remove inaccurate posts ahead of major events like elections, DiResta suggested, and platforms could redesign the user experience so account holders are prompted to reconsider whether they’ve fully read content, or know it is accurate, before reposting. Social media companies could also be required to fact-check paid political ads, which users might reasonably assume the platforms have verified, Eisenstat said.
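The “reconsider before reposting” idea can be pictured as a small product change. The sketch below is a hypothetical version of that friction step; the ShareAttempt fields and the fact-checker flag are assumptions for illustration, not a real platform API.

```python
from dataclasses import dataclass

@dataclass
class ShareAttempt:
    """Hypothetical reshare event; fields are invented for illustration."""
    user_id: str
    link_url: str
    user_opened_link: bool
    flagged_by_fact_checkers: bool = False

def reshare_prompts(attempt: ShareAttempt) -> list[str]:
    """Return the friction prompts to show before the reshare goes through."""
    prompts = []
    if not attempt.user_opened_link:
        prompts.append("You haven't opened this article. Read it before sharing?")
    if attempt.flagged_by_fact_checkers:
        prompts.append("Independent fact-checkers have disputed this claim. Share anyway?")
    return prompts  # an empty list means the share proceeds without interruption
```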

Stengel also proposed reconsidering speech regulations in a larger way, suggesting state and local governments tackle certain harmful content by passing ordinances against hate speech.

An ideal strategy may not quickly present itself, but Eisenstat said the government should focus on passing something that at least makes things better.

“Everyone thinks we need to have the end-all, be-all, silver bullet, one piece of legislation that suddenly makes social media a lovely, healthy place for democracy. That’s not going to happen,” she said. “But we can all agree, the status quo cannot continue.”



Jule Pattison-Gordon is a senior staff writer for Governing and a former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.