Confronting Disinformation Spreaders on Twitter Only Makes It Worse, MIT Scientists Say

This article appeared on vice.com

Of all the reply guy species, the most pernicious is the correction guy. You’ve seen him before; perhaps you’ve even been him. When someone (often a celebrity or politician) tweets bad science or a provable political lie, the correction guy is there to respond with the correct information. According to a new study conducted by researchers at MIT, being corrected online just makes the original posters more toxic and obnoxious.

Basically, the new finding is that correcting fake news, disinformation, and horrible tweets can backfire and make everything worse. This is a “perverse downstream consequence of debunking,” a phrase lifted straight from the title of MIT research published in the Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. The core takeaway is that “being corrected by another user for posting false political news increases subsequent sharing of low quality, partisan, and toxic content.”

The MIT researchers’ work is actually a continuation of their study into the effects of social media. This recent experiment started because the team had previously discovered something interesting about how people behave online. “In a recent paper published in Nature, we found that a simple accuracy nudge—asking people to judge the accuracy of a random headline—improved the quality of the news they shared afterward (by shifting their attention towards the concept of accuracy),” David Rand, an MIT researcher and co-author of the paper, told Motherboard in an email.

“In the current study, we wanted to see whether a similar effect would happen if people who shared false news were directly corrected,” he said. “Direct correction could be an even more powerful accuracy prime—or, it could backfire by making people feel defensive or focusing their attention on social factors (e.g. embarrassment) rather than accuracy.”

According to the study, in which researchers went undercover as reply guys, the corrections backfired. The team started by picking the lies it would correct: 11 political falsehoods that had been fact-checked and thoroughly debunked by Snopes, a mix of liberal and conservative claims being passed around online as if they were hard truths. These included simple lies about the level of donations the Clinton Foundation received from Ukraine, a story about Donald Trump evicting a disabled veteran with a therapy dog from a Trump property, and a fake picture of Ron Jeremy hanging out with Melania Trump.

Armed with the lies they’d seen spreading around online and the articles that would help set the record straight, the team looked for people on Twitter spreading the misinformation. “We selected 2,000 of these users to include in our study, attempting to recreate as much ideological balance as possible,” the study said.

Then the researchers created “human-looking bot accounts that appeared to be white men. We kept the race and gender constant across bots to reduce noise, and we used white men since a majority of our subjects were also white men.” The researchers waited three months to give the accounts time to mature, and all of them had more than 1,000 followers by the time they started correcting people on Twitter.

The bots did this by sending out a public reply to any tweet that contained a link to one of the false stories. The reply would always contain a polite phrase like “I’m uncertain about this article—it might not be true. I found a link on Snopes that says this headline is false,” followed by a link to the Snopes article. In all, the bots sent 1,454 corrective messages.
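The paper doesn’t publish the bots’ code, but the mechanics described here are simple enough to sketch. Below is a minimal, hypothetical version of the corrective reply using the Tweepy library for the Twitter API; the credentials, target tweet ID, and Snopes URL are all placeholders, not details from the study.

```python
# Hypothetical sketch of the reply-bot mechanics described above, using the
# Tweepy library. Credentials, the target tweet ID, and the Snopes URL are
# placeholders; the study's actual implementation is not public.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# The polite correction template quoted above, with the debunking link appended.
CORRECTION = (
    "I'm uncertain about this article - it might not be true. "
    "I found a link on Snopes that says this headline is false. {url}"
)

def send_correction(tweet_id: int, snopes_url: str) -> None:
    """Publicly reply to the tweet that shared the false story."""
    client.create_tweet(
        text=CORRECTION.format(url=snopes_url),
        in_reply_to_tweet_id=tweet_id,
    )

# Example: reply to a (made-up) tweet spreading one of the debunked claims.
send_correction(1234567890, "https://www.snopes.com/fact-check/example/")
```

Each of the 1,454 corrections was sent this way: as an ordinary public reply from an aged, human-looking account, indistinguishable from any other reply guy.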

After the reply guy bot butted in, the researchers watched the accounts to see what they’d tweet and retweet. “What we found was that getting corrected slightly decreased the quality of the news people retweeted afterward (and had no effect on primary tweets),” Rand said. “These results are a bit discouraging—it would have been great if direct corrections caused people to clean up their act and share higher quality news! But they emphasize the social element of social media. Getting publicly corrected for sharing falsehoods is a very social experience, and it’s maybe not so surprising that this experience could focus attention on social factors.”

Getting corrected by a reply guy didn’t change the users’ own primary tweets, but it did make them retweet more false news, lean into their own partisan slant, and use more toxic language on Twitter. Rand and the rest of the team could only speculate as to why this occurred—the best guess is the social pressure that comes from being publicly corrected—but they are not done studying the topic.
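The article doesn’t spell out how “more toxic language” was quantified. A common choice in this line of research is Google Jigsaw’s Perspective API, which returns a toxicity probability for a piece of text; a hypothetical scoring call might look like the sketch below (the API key is a placeholder, and this illustrates the general technique, not the paper’s published pipeline).

```python
# Hypothetical illustration of scoring tweet toxicity with Google Jigsaw's
# Perspective API, a common tool in this research area. The API key is a
# placeholder; this is not necessarily the study's pipeline.
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY probability (0.0 to 1.0) for `text`."""
    response = requests.post(
        API_URL,
        params={"key": api_key},
        json={
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        },
        timeout=10,
    )
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# Comparing average scores on tweets before and after a correction would
# show whether a user's language got more toxic.
print(toxicity_score("example tweet text", "YOUR_API_KEY"))
```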

“We want to figure out what exactly are the key differences between this paper and our prior work on accuracy nudge—that is, to figure out what kinds of interventions increase versus decrease the quality of news people share,” he said. “There is no question that social media has changed the way people interact. But understanding how exactly it’s changed things is really difficult. At the very least, it’s made it possible to have dialogue (be it constructive, or not so much) with people all over the world who otherwise you would never meet or interact with.”