In this edition:
* Why fact-checking works
* Minnesota launches a public page to counter federal misinformation after two fatal shootings in Minneapolis
* IFCN awards $750,000 in SUSTAIN grants
* IFCN responds to U.S. visa restrictions
* New research on fact-checking requests to AI bots on X
(Photo/USA Today)
** Let's talk (again) about why fact-checking works
------------------------------------------------------------
By Angie D. Holan
At a recent Northwestern University conference ([link removed]) on disinformation, several speakers argued that fact-checking “doesn’t work” — that it can't scale online, and that platforms don't care. These comments are spreading even among people who care about truth and accuracy.
That’s frankly alarming. Fact-checking has enough battles to fight without taking friendly fire.
So let me say this plainly: We need to talk, because fact-checking works.
First, let's talk about what “doesn't work” actually means. Fact-checking isn't designed to eliminate all false information — that's an impossible standard. If your expectation is that fact-checking will punish liars or change election outcomes, you’re setting it up to fail no matter how effective it actually is.
Instead, fact-checking gives people accurate information when they need it, and on social media it slows the spread of viral lies. Rigorous ([link removed]) communications ([link removed]) research ([link removed]) has shown repeatedly that debunking and media literacy reminders do work to keep people better informed.
And this approach has been deployed at scale. For years, Meta distributed fact checks to users who encountered false content across millions of interactions, and algorithmically attached those fact checks to similar claims. The program worked not by suppressing information but by interrupting viral sharing. A new study ([link removed]) shows that when users saw a fact check attached to content they were about to share, sharing rates dropped. Some users even went back and deliberately deleted their posts. False claims still circulated, but they didn't go viral at the same rates. The program continues today everywhere but the United States.
We can’t fully defend these programs, though, because they’re not very transparent, and that’s a problem. People couldn't tell that millions of fact-check interventions were happening behind the scenes. They only saw that false information still existed, and concluded fact-checking failed. But reducing viral spread by 20%, 30% or 40% in specific interventions isn't failure.
Today, these programs are being dismantled, not because they didn't work, but because powerful actors decided they didn't want them to work.
We also can’t afford to say fact-checking “doesn't work” when what we mean is “there's still too much profit in fraud and political lying” or “it's not the only answer” or “fact-checking needs more effective distribution.” Those are problems with implementation and political will, not fundamental flaws in the idea of correcting false information. When we collapse those distinctions, we give ammunition to those who want to justify abandoning fact-checking entirely. That abandonment serves people with money, power or political goals, not the public. Precision matters here.
This precision matters because accepting the premise that fact-checking is futile can lead to its defunding and abandonment. We make it easier for platforms to walk away and for governments to disinvest. We increase the public’s feelings of powerlessness and frustration. Acknowledging challenges is necessary. Declaring defeat is not.
Fact-checking is one essential tool for creating an information environment where everyone has ready access to high-quality information, in order to make informed decisions about their lives. That goal remains worth fighting for. But achieving it requires us to be precise about what the real problems are — and to be honest about who is working toward that goal and who has decided to work against it.
I’m not surprised that people who profit from weakening content moderation say fact-checking doesn’t work. But when it comes from people who actually care about truth and information integrity, it’s time to lovingly correct these friends and tell them why they're wrong.
** IFCN awards $750,000 in SUSTAIN grants to 25 fact-checkers
------------------------------------------------------------
By Enock Nyariki
Readers of this newsletter know the pressure on fact-checkers. Platform partnerships are narrowing. Major funders are pulling back. Political hostility is rising in the United States and elsewhere, with some governments and political actors working to discredit fact-checkers and researchers by falsely portraying verification work as partisan censorship. At the same time, false claims move faster than ever, and the need for evidence-based journalism remains high.
That pressure is exactly what SUSTAIN is meant to address. This month, the International Fact-Checking Network awarded $750,000 in flexible business sustainability grants to 25 fact-checking organizations in the first round of SUSTAIN. The $30,000 grants are meant to help newsrooms keep publishing while they stabilize operations, retain key staff, and buy time to work on longer-term revenue plans. Unlike earlier Global Fact Check Fund rounds that focused on specific projects, SUSTAIN is deliberately aimed at core operations, with lighter reporting requirements to reduce administrative burden in busy newsrooms.
The recipients include verified signatories working in and around Belarus and Russia, organizations operating in economically volatile environments such as Venezuela, and teams building verification capacity in countries including Nigeria, Kosovo, Iraq, and the Philippines. In total, the fund received 51 eligible applications and supported 25.
A second SUSTAIN round opens in mid-February. Organizations that were not selected in the first round are encouraged to strengthen their applications and reapply. Eligibility remains limited to verified signatories of the IFCN Code of Principles.
See the full announcement ([link removed]) for the list of recipients.
ON OUR RADAR
* The International Fact-Checking Network issued a statement ([link removed]) opposing the Trump administration’s decision to deny U.S. visas to European public officials and civil society leaders, calling the move censorship and warning it reflects rising repression. The IFCN said portraying fact-checkers, researchers and trust and safety professionals as national security threats echoes tactics used by authoritarian regimes and targets people for civic participation under laws passed by democratic governments. It said the policy could make it harder for fact-checking journalists to operate without fear of political retaliation and called for an immediate reversal of the restrictions.
* Minnesota’s Department of Corrections has launched ([link removed]) a “Combatting DHS Misinformation” page after two shootings by U.S. immigration agents in Minneapolis and what the state calls a pattern of false federal claims. After ICE killed Renee Good and Border Patrol agents killed ICU nurse Alex Pretti, the agency said federal officials misstated who they were targeting, invented or inflated criminal histories, and exaggerated how many people Minnesota holds for ICE. The state says those claims are being used to justify a broader enforcement crackdown and is now publicly publishing corrections.
* A paper in Science ([link removed]) warns that “malicious AI swarms ([link removed]) ,” combining large language models with autonomous agents, could run coordinated influence campaigns that mimic human behavior, infiltrate online communities and fabricate the appearance of consensus at scale. The authors say a single operator could control thousands of adaptive accounts, making such campaigns harder to detect than today’s bot networks and potentially capable of shaping public opinion and elections. They argue current platform defenses and governance systems are not built to handle this kind of threat.
* A working paper ([link removed]) analyzing 1.67 million English-language fact-checking requests to Grok and Perplexity on X between February and September 2025 finds that such requests made up 7.6% of all bot interactions and focused heavily on politics, economics, and current events. The study, by researchers at Université Paris-Saclay, Oxford, and Cornell, documents partisan splits in model use and finds that Grok and Perplexity agreed on claim accuracy 52.6% of the time while strongly disagreeing 13.6% of the time; in a 100-post sample, both bots matched human fact-checkers less often than professional fact-checkers matched each other.
* Another working paper ([link removed]), based on an 18-month partnership with AFP Factuel, finds that fact-checking reduced the circulation of misinformation on Facebook by about 8%, an effect driven entirely by stories rated as false. The study, by researchers at Sciences Po and HEC Liège, tracked 944 stories discussed at daily AFP editorial meetings between December 2021 and June 2023, comparing those that were fact-checked against similar stories that were considered but set aside. Fact checks also more than doubled the rate at which users deleted flagged posts and made those users less likely to share misinformation afterward.
* A global survey of 280 news executives in 51 countries finds confidence in journalism at an all-time low, with only 38% saying they are optimistic about the field’s prospects, down 22 percentage points from four years ago. The Reuters Institute’s annual trends report ([link removed]) says publishers expect search traffic to fall by more than 40% over the next three years as AI “answer engines” replace traditional search, pushing newsrooms to shift resources toward original reporting and video and away from service journalism and evergreen content they expect AI to commoditize. Executives also say they plan to put more effort into YouTube and AI platforms, and less into X and Facebook.
Have ideas or suggestions for the next issue of Factually? Email us at [email protected].
© All rights reserved Poynter Institute 2026
801 Third Street South, St. Petersburg, FL 33701