From The International Fact-Checking Network <[email protected]>
Subject Fact-checking leaders discuss opportunities and pitfalls of AI
Date September 25, 2025 1:15 PM
  Links have been removed from this email. Learn more in the FAQ.

Written by Enock Nyariki (mailto:[email protected]?subject=&body=) and edited by Angie Drobnic Holan (mailto:[email protected]?subject=&body=)

In this edition
* AI and fact-checking: What’s the future?
* Africa’s fact-checkers gather in Dakar to confront regional threats
* IFCN town hall for signatories scheduled for Oct. 9
* Meedan’s CEO retires

Panelists at IFCN’s GlobalFact Virtual on AI and fact-checking. (Photo: Poynter)

The AI chatbot said she was a widow. She isn’t.

Gemma Mendoza was testing Rai, Rappler’s new AI chatbot, when it made a surprising claim. Not about a politician or policy in the Philippines where she works. About her. “It said I’m a widow,” she recalled. “And I’m not.”

Mendoza talked about the challenges and opportunities for AI and fact-checking during the International Fact-Checking Network’s new webinar series GlobalFact Virtual. The discussion, which I moderated, focused on how to build tools responsibly and what role AI should play in the future of the field.

The Gemma-is-a-widow error happened when the model fused two different people with overlapping names. Mendoza’s team, which she leads as head of digital services at the newsroom founded by Nobel laureate Maria Ressa, went back to basics. They tightened entity recognition, raised confidence thresholds, and added clearer links to source material. For now, she keeps Rai confined to a single monitored chat room and reviews every response -- a practice experts describe as keeping a human in the loop to address factual inaccuracies.
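The threshold-and-review workflow described above can be sketched roughly as follows. Everything here -- the `Answer` class, the `review_queue`, the 0.9 cutoff, the example URLs -- is an illustrative assumption, not a detail of Rappler’s actual system:

```python
# Hypothetical sketch: gate a chatbot's answers behind a confidence
# threshold and route low-confidence ones to a human reviewer.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.9  # "answer only when it's sure"

@dataclass
class Answer:
    text: str
    confidence: float
    sources: list  # links back to archive articles

review_queue: list = []  # a human reviews everything that lands here

def respond(answer: Answer) -> Optional[str]:
    """Return the answer only if it is confident AND sourced; else defer."""
    if answer.confidence >= CONFIDENCE_THRESHOLD and answer.sources:
        return f"{answer.text}\nSources: {', '.join(answer.sources)}"
    review_queue.append(answer)  # human in the loop catches the rest
    return None

# Usage
a = Answer("Gemma Mendoza leads digital services at Rappler.", 0.95,
           ["https://example.org/archive/123"])
respond(a)  # confident and sourced: returned to the user
b = Answer("She is a widow.", 0.4, [])
respond(b)  # low confidence: withheld and queued for human review
```

The key design choice is that a sourced answer and a confident answer are both required; failing either test sends the response to a person instead of the user.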

“We had to make it answer only when it’s sure,” she said.

Her caution echoed a broader theme: AI tools need boundaries. At a time when technology companies are rolling out unverified summaries across search, fact-checkers are doing the opposite. Rai draws only from Rappler’s archive, refreshes its index every 15 minutes, and points users back to original sources. It runs on GraphRAG, grounded in the newsroom’s knowledge graph. But what matters more, she said, is the patient work of measuring accuracy, tracking trust, and refining the product before it scales.
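That archive-only grounding can be sketched with a deliberately simple keyword index; a production system such as Rai’s GraphRAG setup would use embeddings and a knowledge graph instead. All class names, function names, and URLs below are hypothetical:

```python
# Minimal sketch of archive-only retrieval with a periodic index refresh,
# loosely modeled on the behavior described above (15-minute re-index,
# answers grounded only in the newsroom's own articles).
import time

REFRESH_SECONDS = 15 * 60  # rebuild the index every 15 minutes

class ArchiveIndex:
    def __init__(self, fetch_archive):
        self._fetch = fetch_archive   # callable returning {url: text}
        self._docs = {}
        self._last_refresh = 0.0

    def _maybe_refresh(self):
        # Only the newsroom archive is ever indexed -- never the open web.
        if time.time() - self._last_refresh >= REFRESH_SECONDS:
            self._docs = self._fetch()
            self._last_refresh = time.time()

    def retrieve(self, query, k=3):
        """Naive keyword-overlap scoring; real systems use embeddings/graphs."""
        self._maybe_refresh()
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(text.lower().split())), url)
            for url, text in self._docs.items()
        ]
        scored.sort(reverse=True)
        return [url for score, url in scored[:k] if score > 0]

# Usage: every answer cites URLs drawn from the archive itself.
index = ArchiveIndex(lambda: {
    "https://example.org/a1": "fact check on vaccine claims",
    "https://example.org/a2": "election misinformation roundup",
})
index.retrieve("vaccine fact check")  # matches only the archive's own pages
```

Because the index is rebuilt from a single trusted corpus on a timer, the bot can always point users back to original source articles rather than to whatever the open web happens to say.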

Andy Dudfield, who joined the panel from Full Fact in the U.K., where he leads AI work, had an important message for funders: support long-term teams instead of one-off projects. Too often, he said, nonprofit organizations chase a flashy launch and abandon the work of upkeep.

He also argued that fact-checkers should design with distribution to the public in mind. “Where do our fact checks need to exist?” he asked, noting the shift from Google and social feeds to large language models like ChatGPT or Gemini, where millions now seek answers.

Alex Mahadevan, director of IFCN signatory MediaWise and co-author of the Poynter Institute’s AI ethics guide, agreed. “People are craving what is essentially fact checks,” he said.

But trust isn’t guaranteed, he warned. A U.S. study ([link removed]) Mahadevan led with the University of Minnesota found that many Americans distrust journalists’ use of AI -- even for basic tasks like data analysis. Transparency helps, he said, but can also backfire. When newsrooms disclose AI use, audiences often trust the content less.

“There’s an information vacuum,” he said. “And it’s being filled by hype and fear. We need to carve out space for realistic conversations.”

That space, panelists said, must include hard questions about the environmental cost of AI and the human toll of training large models -- especially low-paid workers in places like Kenya and Venezuela who handle content moderation and reinforcement learning tasks behind the scenes.

The panel explored three core questions: how to responsibly build AI tools inside newsrooms, how to embed fact checks into emerging platforms like chatbots and language models, and how public trust changes when newsrooms disclose their use of AI.

A recording of the session ([link removed]) is free for IFCN signatories. Others can access it at a discounted rate.

Support the International Fact-Checking Network ([link removed])


** Africa’s fact-checkers gather in Dakar as regional threats test information integrity
------------------------------------------------------------

(Photo: Courtesy of Africa Check)

Fact-checkers from across Africa will meet in Dakar, Senegal, on Oct. 1 and 2 for the fourth Africa Facts Summit. This is the first time the annual summit will take place in a French-speaking country, a sign of how the network has expanded since earlier editions in Ghana, Mauritius and Kenya.

The conference, organized by IFCN signatory Africa Check, brings together fact-checking journalists, researchers, and platform information integrity leads to focus on the most urgent challenges in combating online falsehoods.

Sessions will explore the impact of artificial intelligence on fact-checking, misinformation in conflict zones, and regional strategies for building trust in fragile media environments. Other panels will examine gendered disinformation, digital rights, coalition-based response models, and the future of journalism education.

Workshops and breakouts are designed to be practical. Speakers will walk through tools for spotting deepfakes, verifying audio, and tracking health misinformation through community networks. Several sessions include hands-on demonstrations of AI platforms, radio monitoring, and press cartoons used in local verification efforts.

The summit will close with the African Fact-Checking Awards gala. Three categories honor the work of a student journalist, a working journalist, and a professional fact-checker.

I’ll be speaking at the event and reporting for Factually.


** IFCN invites signatories to virtual town hall on Oct. 9
------------------------------------------------------------

If you are part of a signatory organization to the IFCN Code of Principles, you are invited to an open discussion of IFCN priorities for 2026 and beyond. This meeting is a virtual version of the popular town hall we held at GlobalFact in Rio de Janeiro in June.

In this meeting, we’ll be asking signatories to share their views on the following topics:

* How to prioritize grant-making for individual signatories: What criteria should the next phase of the Global Fact Check Fund emphasize?
* Elevating the Code of Principles and increasing its impact.
* Updating the Code of Principles for the future – what parts of the Code need updating? How should we approach AI?
* The future of GlobalFact: Evaluating the effectiveness of our in-person meetings.
* Increasing audience trust and defending the value of fact-checking.

Check your email for the webinar link (Subject line: September IFCN Advisory Meeting agenda + other updates), and come ready to share your views.


** Ed Bice to step down as Meedan CEO after two decades
------------------------------------------------------------

Meedan CEO Ed Bice at the International Journalism Festival.

Ed Bice, the founding CEO of Meedan, will step down in November after more than two decades shaping tools that many fact-checkers use. Meedan confirmed the transition this month and said Bice will retire from his role on Nov. 11, though he will remain on the board.

Founded in 2006, Meedan is a nonprofit that builds open-source software and programs for journalists, fact-checkers, and civil society to verify claims, monitor elections, and respond to online falsehoods. Its flagship tool, Check, is used by 48 organizations. Across its programs, Meedan supports 33 languages and operates in 46 countries.

In a personal note on LinkedIn ([link removed]), Bice traced the origins of the work to a 2003 anti-war protest arrest in San Francisco. “I did not realize that the bus ride would last 22 years,” he wrote, “nor that it was bound for a destination that was absurdly, incomprehensibly ambitious.”

He added, “We have not yet succeeded in the larger mission of fixing the internet, but we’ve improved parts of it, and the world, during critical moments.”

Bice’s successor is Dima Saber, Meedan’s chief program officer since 2022 and former director of the Check Global initiative. Saber, a media scholar and strategist who began working with Meedan in 2013, will become executive director in November.

ON OUR RADAR
* The Canadian Medical Association says ([link removed]) health misinformation is straining care. A Leger survey of 75 clinicians finds they spend nearly a day a week countering it, and about two-thirds face weekly treatment refusals tied to online claims. Canada is seeing more measles, including Ontario’s first measles death in decades in 2024.
* NewsGuard finds ([link removed]) popular chatbots now repeat false claims in 35% of responses to news prompts, up from 18% a year ago. Inflection hit 57% and Perplexity 47%, while Claude was 10% and Gemini 17%. The bots now answer every prompt and cite the web, sometimes pulling unreliable sources.
* Euronews reports ([link removed]) a Kremlin-backed site, the “Global Fact-Checking Network,” poses as a fact-checking network and mimics IFCN’s name. Reporters Without Borders says its posts push pro-Russian narratives, including a Mariupol piece by Christelle Néant that calls housing seizures legal while omitting the occupation. The site is tied to TASS, Russia’s state news agency, and ANO Dialog, a Kremlin-linked group sanctioned by the U.S. and EU for disinformation work.
* A Boston University and Ipsos survey ([link removed]) finds 84% of Americans support protections against unauthorized AI deepfakes. Support is bipartisan -- 84% of Republicans and 90% of Democrats. Large majorities also back labeling and takedowns.
* YouTube will allow ([link removed]) creators banned for COVID-19 and election misinformation to apply for reinstatement after pressure from House Judiciary Committee Chair Jim Jordan.
* Russia is flooding Moldova’s election with AI-powered disinformation, AP reports ([link removed]), using spoof sites like Restmedia and paid engagement farms. Police made 74 arrests after 250 raids. Watchdog Promo-Lex flagged 500 fake TikTok accounts that racked up 1.3 million views in three days.

Have ideas or suggestions for the next issue of Factually? Email us at [email protected] (mailto:[email protected]?subject=&body=).


© All rights reserved Poynter Institute 2025
801 Third Street South, St. Petersburg, FL 33701

Was this email forwarded to you? Subscribe to our newsletters ([link removed]).

If you don't want to receive email updates from Poynter, we understand.
You can change your subscription preferences ([link removed]) or unsubscribe from all Poynter emails ([link removed]) .