Viral, true tweets spread just as fast, wide and deep as viral untrue tweets, Cornell researchers have found – challenging the prevailing assumption that untruths on Twitter move faster.
This finding helps illuminate why certain mitigating strategies to curb disinformation on Twitter haven’t worked, while also suggesting that the most effective strategy against fake news may begin with users.
“Few people would disagree that it would be a good thing to scrub away disinformation online,” said Jonas Juul, a postdoctoral researcher in Cornell’s Center for Applied Mathematics and co-author of “Comparing Information Diffusion Mechanisms by Matching on Cascade Size,” published Nov. 7 in the Proceedings of the National Academy of Sciences.
“This type of research is still a new area of investigation,” said Juul, whose Cornell mentors include Jon Kleinberg, the Tisch University Professor of Computer Science and Austin Benson, assistant professor of computer science, both in the Cornell Ann S. Bowers College of Computing and Information Science; and Steven Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics in the College of Arts and Sciences. “With each study, we get another piece of the puzzle we’re trying to solve, that being a better, more fair and truthful social media landscape.”
The article was co-authored by Johan Ugander of Stanford University.
Building on a landmark 2018 study of 11 years’ worth of Twitter data that suggested falsehoods spread farther, faster, deeper and more broadly than truths, Juul and Ugander homed in on the structural properties of Twitter “cascades” – the paths viral tweets take from the original poster on down through the network via retweets.
By extension, cascades show a tweet’s general popularity; the more a tweet is shared, the bigger its cascade. Juul and Ugander examined cascades of the same size, meaning true and untrue tweets that reached about the same number of users.
What they found is that the cascades of equally shared true and untrue tweets were virtually indistinguishable from each other, to the extent that Juul and Ugander couldn’t tell which tweet was true or untrue just by comparing cascades. While it’s true that people are more prone to share falsehoods than truths online – a phenomenon that researchers have yet to explain – this latest finding revises the prevailing assumption that untruths move more ravenously than truths through Twitter.
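The matching idea can be illustrated with a small sketch. The code below is not the authors’ method or data – it grows synthetic retweet trees with a toy random-attachment model (an assumption for illustration), then compares two structural properties, depth and maximum breadth, only between cascades of the same size:

```python
import random
from collections import defaultdict

def random_cascade(size, seed):
    """Grow a synthetic retweet tree of `size` nodes rooted at node 0.
    Each new node retweets a uniformly chosen earlier node (a toy
    random-attachment model, not the empirical Twitter data)."""
    rng = random.Random(seed)
    parents = {0: None}  # maps node -> the node it retweeted
    for node in range(1, size):
        parents[node] = rng.randrange(node)
    return parents

def node_depth(parents, node):
    """Number of retweet hops from `node` back to the original post."""
    hops = 0
    while parents[node] is not None:
        node = parents[node]
        hops += 1
    return hops

def depth(parents):
    """Cascade depth: the longest retweet chain from the original post."""
    return max(node_depth(parents, n) for n in parents)

def max_breadth(parents):
    """Cascade breadth: the largest number of nodes at any single hop level."""
    level_counts = defaultdict(int)
    for n in parents:
        level_counts[node_depth(parents, n)] += 1
    return max(level_counts.values())

# Match on size: compare only cascades that reached the same number of users.
size = 200
cascade_a = random_cascade(size, seed=1)
cascade_b = random_cascade(size, seed=2)
print("depths:", depth(cascade_a), depth(cascade_b))
print("breadths:", max_breadth(cascade_a), max_breadth(cascade_b))
```

The point of conditioning on size is that depth, breadth and speed all grow with how many users a cascade reaches, so comparing unmatched cascades mostly measures popularity rather than structure.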
These findings have implications as Twitter and other social networks look to institute strategies to curb the spread of misinformation. If false news cascades looked different from true news cascades, an algorithm could easily identify and flag potential misinformation.
The new finding is bad news for the prospects of such interventions. For example, flagging viral tweets that have long diffusion patterns, or deprioritizing major “hubs” in news feeds, would have limited success, Juul and Ugander said, because true content would inevitably be flagged and deprioritized, too.
And never mind attempting to individually address the speed, breadth and depth of false news on Twitter, Juul said. The new study shows that these factors depend on cascade size, he said, so mitigating strategies must primarily address cascade size – the popularity, the infectiousness, of a given tweet. It turns out that the power to minimize the cascade – the tweet’s virality – lies in users’ hands.
The authors suggested that better digital literacy among users could make people less likely to share untruths on Twitter.
This research was supported by an Army Research Office multidisciplinary university research initiative award.
Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.