
How Emergency Agencies Can Manage a Storm of Misinformation

Fear and confusion in the aftermath of disasters create fertile ground for misinformation. Social media and AI can amplify it, but there are ways to weather the storm.

A member of a FEMA Urban Search and Rescue team searches flood-damaged property in Asheville, N.C. Misinformation about FEMA assistance has caused some rescue workers to be met with hostility by community members.
(Mario Tama/TNS)
In Brief:

  • Misinformation can hamper disaster recovery efforts, preventing victims from accessing services and, in some cases, endangering rescue workers.
  • Advances in AI have eliminated technical barriers to creating deepfake images that misrepresent disaster impacts or response.
  • As the volume of media misinformation has grown, so has understanding of practices for managing it.


Effective communication in the aftermath of a natural disaster is inherently challenging. Misinformation makes it even more difficult. Two natural disasters that struck the Southeast in quick succession gained additional force by arriving in the final stages of a heated presidential campaign.

For some, the actions of the federal agency responsible for aiding disaster victims became fodder for divisive politicking. The social media channels that have made life harder for public health and election workers by amplifying misinformation about their efforts are having a similar effect on rescue workers, impeding their work and harming morale.

More than half of all Americans get their news from social media at least some of the time. Beyond this, Massachusetts Institute of Technology researchers found that the speed at which news circulates on social media and its ability to “speak first” can influence and frame coverage in other outlets.

Deanne Criswell, the administrator of the Federal Emergency Management Agency (FEMA), called the recent disruption from the spread of conspiracy theories “the worst I have ever seen.” The noise was enough of a distraction from recovery efforts that N.C. Congressman Chuck Edwards, a Republican, published a press release to “dispel outrageous rumors that have circulated online.” (FEMA has its own “rumor response” page.)

Social media’s power to mislead is well recognized, but right now the danger is immediate. Recovery from Helene and Milton is just getting underway, and hurricane season isn’t over yet. Artificial intelligence capabilities that have recently become available to social media users bring additional risks.

Still, there are strategies that can help state and local emergency managers stay ahead of misinformation.

AI-generated images, which no longer require technical expertise to create, have been used in social media misinformation.


Deepfakes in Seconds


Misinformation that causes people to behave incorrectly in an emergency is not the same thing as content intended to influence opinion, says Lucas Hansen, a software engineer who founded a nonprofit to foster understanding of AI. “That could be concretely disastrous and lead to lost lives,” he says.

For an extra $8 a month, X (formerly Twitter) subscribers beyond the “basic” tier gain access to the Grok AI chatbot. This enables them to generate AI images, as Hansen shows in a “Swifties for Trump” demonstration that invites visitors to make their own deepfake.

Images add power to social media content, and advances in AI tools mean anyone can now create images to support a false narrative, as has been done following recent storms. This concerns Hansen. “There are some people that just like to watch the world burn, but if it was required that they invest 12 hours of effort to learn skills and 30 minutes to produce content, that's a sufficient barrier,” he says. Now it’s a matter of seconds from prompt to fake image.

Hansen’s nonprofit, CivAI, has posted an interactive “sandbox” that gives visitors a basic introduction to deepfakes. Public officials can gain access to a greatly expanded set of demonstrations of AI-generated misinformation by request. CivAI doesn’t take positions on specific AI legislation, but Hansen thinks that bipartisan efforts to define the line between “ethically dubious” and illegal, as states have attempted with election ads, could make a difference.

Government agencies need to build social media presence and reputation over years if they hope to be seen as the authoritative source of information in a crisis, Hansen says. “Someone who seems to be too young for the job should probably be put in charge,” he says. “That person isn’t necessarily deciding what all the goals are, but they speak the language natively.”
Augusta Mayor Garnett Johnson gives a press conference to update the public about the city's cleanup efforts after Tropical Storm Helene. Regular briefings such as this can set the narrative regarding emergency response, leaving less room for misinformation to gain traction.
(Mirtha Donastorg/TNS)

Setting the Narrative


Over 17 years at Vermont Emergency Management, Erica Bornemann saw the scale of disinformation increase as social media and Internet platforms multiplied. The primary impact is on disaster survivors, she says. If they come to distrust government and the assistance it offers, their recovery is more difficult.

Bornemann remains involved in frontline efforts as a vice president at AC Disaster Consulting, a firm that provides planning, personnel and communications support to jurisdictions. Emergency management offices aren’t likely to have enough in-house resources to respond to events such as Helene and Milton, Bornemann says.

“It’s already hard to go into the field and provide support to disaster survivors because it’s an emotional and trying time,” she says. “When you have threats of violence against the very people who are trying to help, it makes it even harder.”

In addition to a consistent social media presence, emergency management agencies can engage partners trusted by community members to disseminate disaster information on their channels. “You have to set the narrative and then make sure you’re relying on the right voices and outlets to amplify it,” says Bornemann. Messaging about preparedness is as important as emergency communications and can raise awareness of where to go for information in a disaster. This needs to be backed up by easily navigable information on agency websites.

Traditional media outlets are also part of this ecosystem. In a large event, if emergency managers are going more than a day without a press conference, says Bornemann, someone else is writing the story for them. Emergency management agencies can also limit confusion and misinformation by ensuring any government entities or elected officials likely to issue their own communications have correct information, including who the post-disaster lead is.
A RAND researcher recently shared a 17-point checklist of evidence-based practices that can make social media posts more effective. (RAND)

Evidence-Based Posting


In the fog of disaster, misinformation that contains a glimmer of truth can spread rapidly on social media, says RAND researcher Aaron Clark-Ginsburg. It might be shared by victims or others looking for answers as readily as by those with malicious motives.

Clark-Ginsburg has just completed work on a study of effective techniques for combating online misinformation. It’s undergoing pre-publication review, but seeing that misinformation was impeding rescue efforts, he published a checklist of 17 “evidence-based” practices for communicating on social media, pulled from his research.

“The vision is that you can have it beside you and use it to figure out how to respond when you see disinformation popping up,” he says. Images are a factor in whether a post is shared or noticed, but Clark-Ginsburg can’t say whether the AI-generated images that have accompanied disaster misinformation have caused it to gain more traction.

“I’m actually not sure if there is more dis- and misinformation right now or if we’re just observing it more,” he says. “That’s a really interesting research question.”
Carl Smith is a senior staff writer for Governing and covers a broad range of issues affecting states and localities. He can be reached at carl.smith@governing.com or on Twitter at @governingwriter.