Crowdsourced fact-checking fights misinformation in Taiwan
By Patricia Waldron
As journalists and professional fact-checkers struggle to keep up with the deluge of misinformation online, fact-checking sites that rely on loosely coordinated contributions from volunteers, such as Wikipedia, can help fill the gaps, Cornell research finds.
In a new study, Andy Zhao, a doctoral candidate in information science based at Cornell Tech, compared professional fact-checking articles to posts on Cofacts, a community-sourced fact-checking platform in Taiwan. He found that the crowdsourced site often responded to queries more rapidly than the professionals and addressed a somewhat different range of issues than the professional sites.
“Fact-checking is a core component of being able to use our information ecosystem in a way that supports trustworthy information,” said senior author Mor Naaman, professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science. “Places of knowledge production, like Wikipedia and Cofacts, have proved so far to be the most robust to misinformation campaigns.”
The study, “Insights from a Comparative Study on the Variety, Velocity, Veracity, and Viability of Crowdsourced and Professional Fact-Checking Services,” was published Sept. 21 in the Journal of Online Trust and Safety.
The researchers focused on Cofacts because it is a crowdsourced fact-checking model that had not been well-studied. The Taiwanese government, civil organizations and the tech community established Cofacts in 2017 to address the challenges of both malicious and innocent misinformation – partially in response to efforts by the Chinese government to use disinformation to create a more pro-China public opinion in Taiwan. Much like Wikipedia, anyone on Cofacts can be an editor and post answers, submit questions and upvote or downvote responses. Cofacts also has a bot that fact-checks claims in a popular messaging app.
Starting with more than 60,000 crowdsourced fact-checks and 2,641 professional fact-checks, Zhao used natural language processing to match up responses posted on Cofacts with articles addressing the same questions on two professional fact-checking sites. He looked at how quickly the sites posted responses to queries, the accuracy and persuasiveness of the responses and the range of topics covered.
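The article does not detail the exact matching pipeline Zhao used, but a common way to pair texts that address the same claim is to embed them with a multilingual sentence-encoder and compare cosine similarities. The sketch below is purely illustrative, not the study’s method; the model choice, the example strings and the `SIMILARITY_THRESHOLD` cutoff are assumptions for demonstration.

```python
# Illustrative sketch only: pairs crowdsourced fact-check replies with
# professional fact-checking articles about the same claim using multilingual
# sentence embeddings and cosine similarity. Not the study's actual pipeline.

from sentence_transformers import SentenceTransformer, util

# A multilingual model can handle Traditional Chinese text such as Cofacts posts.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical example data standing in for the two corpora.
cofacts_replies = [
    "這則訊息是詐騙，相關單位並未發布此公告。",
    "疫苗成分的說法缺乏根據，請參考查核中心的文章。",
]
professional_articles = [
    "查核報告：網傳的官方公告為假訊息。",
    "事實查核：疫苗成分謠言不實。",
]

# Encode both collections into dense vectors.
reply_emb = model.encode(cofacts_replies, convert_to_tensor=True)
article_emb = model.encode(professional_articles, convert_to_tensor=True)

# Cosine similarity matrix: rows = Cofacts replies, columns = professional articles.
scores = util.cos_sim(reply_emb, article_emb)

SIMILARITY_THRESHOLD = 0.6  # hypothetical cutoff; a real study would tune and hand-verify

for i, reply in enumerate(cofacts_replies):
    best_j = int(scores[i].argmax())
    best_score = float(scores[i][best_j])
    if best_score >= SIMILARITY_THRESHOLD:
        print(f"Reply {i} matched to professional article {best_j} (similarity {best_score:.2f})")
    else:
        print(f"Reply {i}: no professional counterpart above threshold")
```

In practice, automatically matched pairs like these would still need human verification before comparing speed, accuracy or topic coverage across the two sources.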
He found the Cofacts users often responded faster than journalists, but mostly because they could “stand on the shoulders of giants” and repurpose existing articles from professionals. In this way, Cofacts acts as a distributor for information. “They carry those stories across language, across the nation, or across time, to this exact moment to answer people's questions,” Zhao said.
Importantly, Zhao found that the Cofacts posts were just as accurate as the professional sources. And according to seven native Taiwanese graduate students who acted as raters, articles by journalists were more persuasive, but Cofacts posts were often clearer.
Further analysis showed the crowdsourced site covered a slightly different range of topics compared with those addressed by professionals. Posts on Cofacts were more likely to address recent and local issues – such as regional politics and small-time scams – while journalists were more likely to write about topics requiring expertise, including health claims and international affairs.
“We can leverage the power of the crowds to counter misinformation,” Zhao concluded. “Misinformation comes from everywhere, and we need this battle to happen in all corners.”
The need for fact-checking is likely to continue to grow. While it’s not yet clear how generative artificial intelligence (AI) models, such as ChatGPT or Midjourney, will impact the information landscape, Naaman and Zhao said it is possible that AI programs that generate text and fake images may make it even easier to create and spread misinformation online.
However, despite the success of Cofacts in Taiwan, Zhao and Naaman caution that the same approach may not transfer to other countries. “Cofacts has built on the user habits, the cultures, the background, and political and social structures of Taiwan, which is how they succeed,” Zhao said.
But understanding Cofacts’ success may assist in the design of other fact-checking systems, especially in non-English-speaking regions, which often have access to few, if any, fact-checking resources.
“Understanding how well that kind of model works in different settings could hopefully provide some inspiration and guidelines to people who want to execute similar endeavors in other places,” Naaman said.
The study received partial support from the National Science Foundation.
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.