AI fact-checking works, but mostly for progressives
As social media platforms increasingly lean on artificial intelligence to spot misinformation, new research suggests those tools don't work equally well for everyone.

In two large online experiments conducted in the U.S. and U.K. during the 2020 and 2022 news cycles, researchers found that AI fact-checkers were more effective than human fact-checkers at reducing belief in false news, but mainly among progressive users. Conservatives reacted about the same to AI and human fact-checking, often putting more weight on the reputation of the news source itself.
The study shows that people's politics strongly shape whether fact-checks, human or AI, actually change minds, said Jason Thatcher, professor of information systems at the Leeds School of Business and co-author of the paper, forthcoming in MIS Quarterly.
"People that are conservative trust humans because they're predictable, they're reliable, they're familiar, whereas perhaps progressives trust the technology," Thatcher said.
That divide, he added, may help explain why AI fact-checkers persuade some users but not others.
Human vs. AI fact-checkers
The researchers, who included collaborators from Northeastern University's D'Amore-McKim School of Business and Temple University's Fox School of Business, set out to understand not just whether AI or human fact-checkers worked better, but how people judged the source of a fact check in the first place.
"We weren't interested in which was more effective," Thatcher said. "We were interested in how people evaluated who did the rating."
To do that, the researchers conducted two online experiments involving 370 active social media users in the United States and the United Kingdom, designed to reflect how people actually come across news on social media. Participants were shown news posts designed to look like real social media content, similar to what someone might see on Facebook or Reddit.
The posts covered polarizing, widely discussed issues where misinformation often spreads, including climate change, vaccines, immigration and taxes. Some of the news stories were false, and some were accurate, reflecting the mixed information people see online.
The researchers then varied a few key details: whether a post was fact-checked by an AI system, a human fact-checker or not at all, and whether the post appeared to come from a high- or low-reputation source. Participants also reported whether they identified as progressive or conservative, allowing the researchers to compare how different groups responded to the same information.
After viewing each post, participants were asked how believable it seemed and whether they would talk about it, comment on it or share it. The researchers ran the same basic experiment in both countries, drawing on news content from the 2020 and 2022 news cycles, to test whether the results held up beyond a single country or political moment.
Fact-checking isn't just about facts
Overall, the study found that AI fact-checkers were more effective than human ones at reducing belief in false news, but again, primarily among progressive users. Conservatives, on the other hand, reacted about the same to AI and human fact checks and tended to rely more heavily on the reputation of the news source itself.
The research also found that fact-checking can be more complicated when false claims come from well-known or trusted sources, particularly when human fact-checkers are involved.
Taken together, Thatcher said, the findings point to a basic challenge in fighting misinformation: It's not just about getting the facts right but about trust.
"One fact-checking system is probably not going to work for everyone," he said. "The solution is having more than one way of providing evidence, considering the source of information and helping people reach their own conclusions."