How Can We Use Technology to Weed Out Online Disinformation?

Intentional or not, untrue information propagating on the Internet threatens democratic institutions and the public good. Emerging tech tools aim to help government combat the threat.

Some people believe that Donald Trump really won the 2020 U.S. election. Some people believe COVID-19 is only as bad as the flu. Some people believe the president is part of a secret satanic cult that eats children.

These are all patently, provably false ideas. But the fact that many Americans believe them — to say nothing of others around the world — has become a problem.

Perhaps the most tangible example is in vaccination rates. As of this article’s writing, about one-fifth of adults say they won’t get vaccinated against COVID-19, with another 12 to 15 percent saying they will wait for more information. That’s enough people to leave the door open for more outbreaks and complicate efforts to restart the economy.

But what if technology could help uncover this type of misinformation and disinformation, giving government a way to smother it before it spreads? Or maybe even just make it disappear?

That’s the concept behind new software hitting the market for government agencies. Using artificial intelligence, these tools mine major social media platforms, along with their fringe competitors, as well as blogs and other websites to find misinformation and disinformation.

In a world where outlandish beliefs lead to real-life consequences, the existence of such technology might sound like a welcome relief to beleaguered public officials.

But it also presents many questions that lack clear answers. How accurate are these tools? What can government do with the information they provide? And in a world where the truth itself has become a political debate, who decides what counts as misinformation and disinformation?

HOW IT WORKS



Different companies offer different approaches, but the general framework is to monitor vast swaths of the Internet — mostly social media, where misinformation spreads fastest — for messages on specified subjects, and sometimes with a specified sentiment such as a negative statement about vaccines. The companies will then use some sort of ground truth, perhaps a government agency or a trusted group of experts, to see how far off the post in question is.

One might view it as a rough approximation of a person’s mental journey when scrolling through their own social media feeds, only a lot faster and with the rigidity of computer logic.
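
In rough terms, the pipeline looks something like the sketch below. It is a bare-bones illustration rather than any vendor’s actual product: it filters sample posts to a specified subject using a keyword list and keeps only those with a negative tone, mirroring the monitoring stage described above. The keyword lists, sample posts and function names are placeholders invented for this example.

```python
# Minimal sketch of the monitoring stage: keep only posts that are on a
# specified subject and carry a negative tone. Lexicons and posts below
# are illustrative placeholders, not production data.

TOPIC_KEYWORDS = {"vaccine", "vaccines", "vaccination", "vaccinated"}
NEGATIVE_WORDS = {"dangerous", "hoax", "scam", "poison", "refuse"}

posts = [
    "Getting vaccinated protected my whole family.",
    "The vaccine is a dangerous hoax, refuse it.",
    "Election results will be certified next week.",
]

def tokens(text: str) -> set:
    """Lowercased words with basic punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def on_topic(text: str) -> bool:
    """True if the post mentions any subject keyword."""
    return bool(tokens(text) & TOPIC_KEYWORDS)

def negative_tone(text: str) -> bool:
    """Crude sentiment check: any negative lexicon word present."""
    return bool(tokens(text) & NEGATIVE_WORDS)

flagged = [p for p in posts if on_topic(p) and negative_tone(p)]
print(flagged)  # only the negative, on-topic post survives both filters
```

A real system would then measure each surviving post against a ground truth source, such as CDC guidance, and would do all of this at vastly greater scale, but the filter-then-compare shape is the same.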

Misinformation vs. Disinformation

Both misinformation and disinformation are untrue statements, but the difference between them lies in the intent: If the speaker is deliberately lying or misleading their audience, it’s disinformation.



The company AlphaVu specifically pointed to the Centers for Disease Control and Prevention as a source of truth to measure against. Another company, Logically, uses a variety of chosen experts.

Because they’re looking for replication of misinformation, and because government customers can specify what they’re interested in, a lot of the work can be done quickly and cleanly by looking for posts that are similar to known misinformation or disinformation.

“Without even needing to verify whether that’s misinformation or disinformation, we’re able to understand the context around content to assess whether or not it fits within that rumor profile,” Lyric Jain, CEO of Logically, said in a previous interview with Government Technology. “So those are the types of things that we would flag up, based on the messages, based on the methods that are being used to promote. And all that is automatable to a certain degree, and wherever we’re not confident, that’s where we bring in our human analysts.”
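
To make the “rumor profile” idea concrete, here is a rough sketch of how a post might be scored against a library of known false claims, with a confidence band that routes ambiguous cases to human analysts. The similarity measure, the thresholds and the claim list are assumptions made for illustration, not Logically’s actual method.

```python
# Sketch of rumor-profile matching: score each post against known false
# claims using bag-of-words cosine similarity; flag close matches and send
# borderline scores to an analyst. Claims and thresholds are made up.
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words vector with basic punctuation stripped."""
    return Counter(w.strip(".,!?").lower() for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

KNOWN_FALSE_CLAIMS = [
    "the vaccine alters your dna",
    "covid is no worse than the common flu",
]

AUTO_FLAG = 0.70     # similar enough to flag automatically
HUMAN_REVIEW = 0.45  # ambiguous: route to a human analyst

def triage(post: str) -> str:
    score = max(cosine(bow(post), bow(c)) for c in KNOWN_FALSE_CLAIMS)
    if score >= AUTO_FLAG:
        return "flag"
    if score >= HUMAN_REVIEW:
        return "human review"
    return "ignore"

print(triage("Heard the vaccine alters your DNA, is that true?"))  # flag
print(triage("Vaccine appointments open tomorrow at the clinic"))  # ignore
```

Production systems rely on far richer representations and on signals about how a post is being promoted, but the basic triage (automate where confident, escalate to analysts where not) is the division of labor Jain describes.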

The idea is to bring the most concerning posts to the surface so that public officials can do what they will with them. AlphaVu offers tools to find which geographic locations and demographic groups problematic messages are coming from, which gives governments the ability to target their own messaging campaigns. Logically has a library of material to quickly assemble counter-messaging, and it also has tools to ask social media platforms to take certain posts down.

The idea is to do it all as fast as possible — after all, a lie can be spread to millions with a single click of the mouse.

“If the response isn’t immediate, if it seems like, at least optics-wise, there’s any uncertainty in response, or if there’s a vacuum where a narrative is allowed to go unchallenged, that’s the space where the most harm occurs,” Jain said. “And it’s really hard to convince people once they’ve been convinced of that narrative.”


JUDGES OF THE TRUTH



One thing about this technology that’s unavoidable: Somebody needs to decide what counts as misinformation and disinformation.

So far, that decision is being outsourced to the companies selling the technology. They are using government agencies and experts as their basis of truth, but they’re still making decisions that create the foundation for the truth.

Tara Kirk Sell, a senior scholar at the Johns Hopkins Center for Health Security who studies misinformation and disinformation, sees all sorts of potential problems with that approach and cautions that the problem is too thorny to be easily solved with automation.

“That seems problematic, even as someone who feels like science has not been followed on COVID-19,” she said. “Just because someone doesn’t agree with you, or your opinion, doesn’t mean it’s misinformation.”

Consider the gray area between truth and lies. One of Sell’s research projects studied tweets during the 2014 Ebola outbreak; she and her colleagues went in hoping to classify posts as either true or not true. They found many tweets that fell somewhere in the middle: part true and part false, unclear, technically true but misleading, and so on.

“This is a huge gray area where even as trained public health coders, we had a very difficult time,” she said. “We had to make that extra category, because, you know, that is a hard area to put in one bin or the other.”

There’s also an issue of scope: A public health agency can clarify what a shutdown does or doesn’t involve, but the policy choice of whether to shut down depends on values and priorities.

Sell believes more in education: targeting the listeners more than the speakers. If more people learn to critically evaluate the information in front of them, fact check and ask questions, she said, misinformation could just “roll right off them.”

But that would likely require a large, coordinated effort at the highest levels of government.

“We’ve got to get a lot of different stakeholders involved. And I think it requires expertise in whatever topic area you’re working on, but also some people who are approaching the ethics and the legal requirements, and just people who have a stake in these issues. I think that that’s important,” Sell said. “That’s why we call for a national strategy to combat health-related misinformation.”

PUBLIC HEALTH



In 2020, with misinformation about COVID-19 swirling and mutating, the Virginia Department of Health (VDH) turned to AlphaVu for help. The idea wasn’t to take down posts or hunt down bad actors, but to create something akin to a continuous, configurable digital focus group.

“We use it for a local microcommunity influence, so to speak … particularly wherever it’s high-risk, hard-to-reach populations, this helps,” said Suresh Soundararajan, CIO of VDH. “So we get this information and we operationalize it.”

The tool gives them one score to measure sentiment and another to assess truthfulness. It’s all in support of the department’s messaging strategy.
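
AlphaVu’s scoring is proprietary, so the exact math isn’t public, but a hypothetical sketch shows how two such scores might be rolled into a single risk number on a 0-to-10 scale of the kind the department tracks. All of the weights, example values and function names below are invented for illustration.

```python
# Hypothetical roll-up of a sentiment score and a truthfulness score into a
# 0-10 weekly risk number. Weights and example values are invented.

def weekly_risk(sentiment: float, truthfulness: float) -> float:
    """
    sentiment:    -1.0 (very negative) to 1.0 (very positive)
    truthfulness:  0.0 (mostly false)  to 1.0 (mostly accurate)
    Returns 0-10; highest when chatter is both hostile and inaccurate.
    """
    negativity = (1.0 - sentiment) / 2.0  # -1 -> 1.0, +1 -> 0.0
    falsehood = 1.0 - truthfulness
    return round(10.0 * (0.5 * negativity + 0.5 * falsehood), 1)

# A bad week (hostile, largely false chatter) versus a calmer one.
print(weekly_risk(sentiment=-0.6, truthfulness=0.3))  # 7.5
print(weekly_risk(sentiment=0.1, truthfulness=0.7))   # 3.8
```

A downward drift in a number like this is the sort of signal the department’s weekly meetings are built around.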

“Week to week we have meetings on this data and what the risk is and where it is and all that stuff,” Soundararajan said. “So it helps us to message and say OK, if you see [the conversation] this week, we can message something out to the population the subsequent week.”

Since misinformation happens within a sociopolitical and historical context, it can be hard to tell whether the department’s efforts have helped. But Soundararajan sees some evidence it has: In October, the department was constantly seeing risk scores of seven and higher, on a scale of 10. By March, it was more common to see fives.

“It could be multiple factors. The general anxiety of people must have come down, that’s one thing,” he said. “I would like to think that the way we are countering misinformation with data from AlphaVu, us sending targeted messages to different social media platforms and removing this misinformation … I think that’s kind of helping.”

ELECTIONS



The other obvious application for the technology in recent years is elections, where a combination of distrust, foreign meddling and close results has collectively raised the nation’s blood pressure a notch or two.

Few have seen more of that than Maricopa County, Ariz., which drove its state to a narrow flip from red to blue in 2020 and thus helped put Joe Biden in the White House. In the days following the election, with results trickling in, supporters of Donald Trump accused officials in the state of pushing the results to Biden and protested.

Lester Godsey, chief information security officer of Maricopa County, likened the experience to working in a natural disaster response center.

“That’s the closest example I could come up to, in comparison to the 2020 election, is like, some natural disaster — because it was almost like that,” he said. “From a social media perspective, we just saw profanity-laced tirades online, we were seeing people retweeting and resharing posts that have no — from my perspective — no basis of reality.”

Maricopa County is not a client of AlphaVu or Logically, but Godsey has been outspoken about the role information security has in government’s fight against misinformation and disinformation. To him, it’s all about risk.

“We monitor social media to see what the potential risks are, but at the same time it’s also a source of intelligence for us,” Godsey said. “We go to social media to see if there’s an increased likelihood that somebody is going to launch a cyber attack against us or protest against us — again, not for purposes of censorship or to … not allow somebody constitutional rights, but rather, just to protect your organization [and] the people [who] work for it.”

The county uses social media monitoring tools, of which there are many on the market. Such tools, even though they weren’t necessarily designed for finding misinformation and disinformation, can still serve many of the same functions.

Maricopa County used the tools to fight back when people claimed that open voting locations were closed. Other incidents were more ominous: Godsey recalls one moment when his team found somebody posting about plans to follow employees of the county recorder’s office as they went about their work counting votes.

“Somebody was using Twitter to pass that information along. We reported it to the [Arizona Counter Terrorism Information Center], and then they reached out to their FBI contacts, who then had an agent at Twitter,” he said. “And that account was disabled about an hour or so afterward.”

Information security, after all, encompasses both digital and physical risks.

But the goal is not simply taking down posts, which he considers a slippery slope to censorship. The idea is to be better informed, and therefore prepared.

“Any organization that cares about brand, or their reputation, or the trust that they may or may not have within their wider community should care about social media and should be allocating resources and monitoring how they’re coming across,” he said. “I think that’s something that translates across the board.”


WILL GOVERNMENT BITE?


By all accounts, misinformation and disinformation are not going away. Beyond ordinary people spreading falsehoods online, the use of disinformation campaigns by nation-states appears to be increasing.

“Back in the day, when we started [in 2017], there was just over a dozen … nation-state actors … who were conducting these activities outside of their own countries,” Jain said. “And now that number’s blown up to around 80-something, so we feel like the threat surface is growing larger.”

To what extent government agencies will see that as a direct threat to their work, and seek out technology to fight back, remains an open question. But Sell thinks it’s likely that more government agencies will be interested in tools like the ones offered by Logically, AlphaVu and others.

“This is a growing area of concern for the government,” she said. “I think the appeal of just buying some technological system that you think will help you seems [like] something that is hard to turn down. But I think … the solution is more than just an algorithm.”



Government Technology is a sister site to Governing. Both are divisions of e.Republic.
Ben Miller is the associate editor of data and business for Government Technology. His reporting experience includes breaking news, business, community features and technical subjects. He holds a bachelor’s degree in journalism from the Reynolds School of Journalism at the University of Nevada, Reno, and lives in Sacramento, Calif.