The AI chatbot said she was a widow. She isn’t.
Gemma Mendoza was testing Rai, Rappler’s new AI chatbot, when it made a surprising claim. Not about a politician or policy in the Philippines, where she works. About her. “It said I’m a widow,” she recalled. “And I’m not.”
Mendoza talked about the challenges and opportunities for AI and fact-checking during the International Fact-Checking Network’s new webinar series GlobalFact Virtual. The discussion, which I moderated, focused on how to build tools responsibly and what role AI should play in the future of the field.
The Gemma-is-a-widow error happened when the model fused two different people with overlapping names. Mendoza’s team, which she leads as head of digital services at the newsroom founded by Nobel laureate Maria Ressa, went back to basics. They tightened entity recognition, raised confidence thresholds, and added clearer links to source material. For now, she keeps Rai confined to a single monitored chat room and reviews every response -- a practice experts describe as keeping a human in the loop to catch factual inaccuracies.
“We had to make it answer only when it’s sure,” she said.
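The pattern she describes -- withhold low-confidence answers, attach sources, and route everything past a human reviewer -- can be sketched in a few lines. This is a minimal illustration, not Rappler’s code; the threshold value, class names, and functions below are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of an "answer only when it's sure" gate: each candidate
# answer carries a confidence score, low-confidence or unsourced answers are
# withheld, and every response is queued for human review.
CONFIDENCE_THRESHOLD = 0.85  # assumed value; the article does not give Rappler's number


@dataclass
class CandidateAnswer:
    text: str
    confidence: float       # score from the underlying retrieval/QA model
    source_urls: list[str]  # links back to the original articles


def respond(candidate: CandidateAnswer, review_queue: list) -> str:
    """Return an answer only if the model is confident; always log for review."""
    review_queue.append(candidate)  # human in the loop: every response gets checked
    if candidate.confidence < CONFIDENCE_THRESHOLD or not candidate.source_urls:
        return "I'm not confident enough to answer that based on the archive."
    sources = "\n".join(f"- {url}" for url in candidate.source_urls)
    return f"{candidate.text}\n\nSources:\n{sources}"


# Example: a shaky, unsourced claim (like the "widow" error) never reaches the user.
queue = []
shaky = CandidateAnswer("Gemma Mendoza is a widow.", confidence=0.41, source_urls=[])
print(respond(shaky, queue))
```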
Her caution echoed a broader theme: AI tools need boundaries. At a time when technology companies are rolling out unverified summaries across search, fact-checkers are doing the opposite. Rai draws only from Rappler’s archive, refreshes its index every 15 minutes, and points users back to original sources. It runs on GraphRAG, grounded in the newsroom’s knowledge graph. But what matters more, she said, is the patient work of measuring accuracy, tracking trust, and refining the product before it scales.
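The grounding approach -- answer only from the newsroom’s own archive, keep the index fresh, and always point back to sources -- looks roughly like the sketch below. Rappler’s actual system uses GraphRAG over a knowledge graph; this toy version substitutes plain keyword overlap just to show the flow, and all names, URLs, and the sample archive are hypothetical.

```python
import time

REFRESH_SECONDS = 15 * 60  # the article says Rai's index refreshes every 15 minutes

ARCHIVE = [  # stand-in for the newsroom's article archive
    {"url": "https://www.rappler.com/example-1", "text": "Fact-checked passage about topic A."},
    {"url": "https://www.rappler.com/example-2", "text": "Fact-checked passage about topic B."},
]

_index = {"built_at": 0.0, "docs": []}


def refresh_index_if_stale() -> None:
    """Rebuild the index from the archive when it is older than the refresh window."""
    if time.time() - _index["built_at"] > REFRESH_SECONDS:
        _index["docs"] = list(ARCHIVE)  # a real system would re-crawl the archive here
        _index["built_at"] = time.time()


def retrieve(question: str, k: int = 3) -> list[dict]:
    """Naive keyword-overlap retrieval; GraphRAG would traverse a knowledge graph instead."""
    refresh_index_if_stale()
    words = set(question.lower().split())
    scored = sorted(
        _index["docs"],
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer(question: str) -> str:
    """Compose an answer only from retrieved archive passages, always citing sources."""
    passages = retrieve(question)
    if not passages:
        return "No matching reporting in the archive."
    body = " ".join(p["text"] for p in passages)
    links = "\n".join(f"- {p['url']}" for p in passages)
    return f"{body}\n\nSources:\n{links}"


print(answer("What does the archive say about topic A?"))
```

The point of the pattern is the constraint, not the retrieval trick: the bot can only repeat what the archive already says, and every reply carries a path back to the original reporting.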
Andy Dudfield, who joined the panel from Full Fact in the U.K., where he leads AI work, had an important message for funders: support long-term teams instead of one-off projects. Too often, he said, nonprofit organizations chase a flashy launch and abandon the work of upkeep.
He also argued that fact-checkers should design with distribution to the public in mind. “Where do our fact checks need to exist?” he asked, noting the shift from Google and social feeds to large language models like ChatGPT or Gemini, where millions now seek answers.
Alex Mahadevan, director of IFCN signatory MediaWise and co-author of the Poynter Institute’s AI ethics guide, agreed. “People are craving what is essentially fact checks,” he said.
But trust isn’t guaranteed, he warned. A U.S. study Mahadevan led with the University of Minnesota found that many Americans distrust journalists’ use of AI -- even for basic tasks like data analysis. Transparency helps, he said, but can also backfire. When newsrooms disclose AI use, audiences often trust the content less.
“There’s an information vacuum,” he said. “And it’s being filled by hype and fear. We need to carve out space for realistic conversations.”
That space, panelists said, must include hard questions about the environmental cost of AI and the human toll of training large models -- especially low-paid workers in places like Kenya and Venezuela who handle content moderation and reinforcement learning tasks behind the scenes.
The panel explored three core questions: how to responsibly build AI tools inside newsrooms, how to embed fact checks into emerging platforms like chatbots and language models, and how public trust changes when newsrooms disclose their use of AI.
A recording of the session is free for IFCN signatories. Others can access it at a discounted rate.