From: Project Liberty <[email protected]>
Subject: Who controls your health data in the age of AI?
Date: September 23, 2025 3:09 PM
  Links have been removed from this email. Learn more in the FAQ.
AI puts your sensitive health data at risk

View in browser ([link removed] )

September 23rd, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )

Who controls your health data in the age of AI?

In October 2020, a Finnish college student named Jere received an email on his phone ([link removed] ) .



“If we receive €200 worth of Bitcoin within 24 hours, your information will be permanently deleted from our servers.”



What happened next would become a blueprint for the health data privacy threats we all face in the AI era.



Right away, Jere knew what was going on: he was being extorted by “ransom_man,” the attacker behind a data breach at Vastaamo, a mental health treatment center in Finland. A security flaw in the company’s IT systems had exposed its entire patient database, and “ransom_man” wanted patients, who had shared their most personal mental health struggles and vulnerabilities with Vastaamo, to pay up. If Jere didn’t pay, the attacker promised, “your information will be published for all to see.”



Jere wasn’t the only one who received this email; roughly 30,000 other patients received ransom demands, and many panicked. At the outset of the extortion campaign, desperate patients made nearly 30 payments to the attacker.



Years later, victims of the attack were still suffering anxiety and trauma from the breach, which also sparked a national conversation in Finland about mental health and the growing risk of data breaches in healthcare.



Data breaches like the Vastaamo incident are becoming increasingly common, and they foreshadow a perfect storm now forming, in which AI amplifies the privacy risks surrounding health data.



In this newsletter, we explore the intersection of AI and health data, the privacy risks associated with “body-focused” technologies, and the challenges that arise when powerful technologies, weak security, and sensitive data collide.

// The perfect storm for sensitive health data

The Vastaamo attack was devastating, but if it happened with today’s AI technology, the consequences could be even worse: the amount of health data is growing, data security often remains weak, and AI tools can combine disconnected data points to draw startlingly accurate conclusions about an individual’s health.



First, more sensitive health data exists today than ever before.

While estimates differ, experts believe health-related data is growing at approximately 30% per year ([link removed] ) and already makes up between 11% and 14% of all data globally ([link removed] ) .

The wearable technology market is exploding ([link removed] ) , genomic datasets are expanding ([link removed] ) , and electronic health records are becoming ubiquitous. More data means more exposure. The genetic testing company 23andMe holds DNA samples from 15 million people ([link removed] ) , enabling it to draw conclusions about millions more. Researchers estimate ([link removed] ) that a database covering just 2% of the U.S. population, or about six million people, is sufficient to identify the source of nearly any crime-scene DNA.

Second, that data is not secure.

The number of health-related online data breaches has skyrocketed ([link removed] ) —from 18 in 2008 to 734 in 2024—a year when hackers stole the data of over 275 million people in the U.S. alone. Once stolen, health data gets bought and sold through middlemen called data brokers ([link removed] ) .

- Research from Duke University in 2023 ([link removed] ) found that the data broker industry lacks the security measures necessary to handle sensitive health (and mental health) data. In fact, the Duke researchers discovered that many data brokers openly advertise that the data they’re selling contains sensitive information about individuals’ experiences with depression, insomnia, anxiety, ADHD, and bipolar disorder.
- An investigation by The Markup and CalMatters ([link removed] ) found that state-run healthcare websites across the country have been quietly sending visitors’ sensitive health information and medical histories to platforms like Google, LinkedIn, and Snapchat.
- The bankruptcy of 23andMe ([link removed] ) earlier this year was a wake-up call for many concerned about health data privacy. Genetic testing companies are not governed by existing health data laws such as the Health Insurance Portability and Accountability Act (HIPAA); instead, the sensitive health data they hold is often protected only by a company’s terms of service (as is the case with 23andMe).

Third, powerful AI tools increase exposure.

AI is blurring the distinction between ordinary data and Protected Health Information (PHI), the HIPAA term for health information that can be linked to a specific individual. Today, the computational power of large language models makes it possible to connect seemingly ordinary personal data to PHI.

“Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands,” wrote Daniel Solove, a professor at George Washington University who focuses on privacy, in a 2023 paper ([link removed] ) . “With Big Data and powerful machine learning algorithms, most nonsensitive data give rise to inferences about sensitive data.”

- For example, immersive technologies like augmented and virtual reality can measure pupil dilation. With sophisticated AI tools, those eye-tracking measurements can indicate ([link removed] ) a user’s possible sexual orientation and whether they may have a propensity for illnesses like dementia. In another example, AI can analyze shopping habits ([link removed] ) to infer whether a shopper is pregnant.
- AI tools can also circumvent traditional anonymization and de-identification techniques, increasing the risk that anonymized data will be re-linked to individuals. This lets AI navigate around HIPAA ([link removed] ) , which doesn’t apply to data stripped of identifiers, and exploit health data privacy laws that weren’t designed for today’s computational capabilities. The sketch below shows how simple such a re-linking, or “linkage,” attack can be.
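
To make the risk concrete, here is a minimal sketch in Python of a classic linkage attack, in which a “de-identified” health dataset is joined to a public dataset on shared quasi-identifiers such as ZIP code, birth date, and sex. All people, records, and field names are invented for illustration.

```python
# Toy linkage attack: re-identify "de-identified" health records by joining
# them with a public dataset on shared quasi-identifiers. All people,
# records, and field names here are invented for illustration.
import pandas as pd

# Clinical data with direct identifiers removed, but quasi-identifiers
# (ZIP code, birth date, sex) left intact.
health_records = pd.DataFrame([
    {"zip": "02138", "birth_date": "1985-07-31", "sex": "F", "diagnosis": "depression"},
    {"zip": "10001", "birth_date": "1990-01-15", "sex": "M", "diagnosis": "anxiety"},
])

# A public dataset (think voter rolls) carrying the same quasi-identifiers
# alongside names.
public_roll = pd.DataFrame([
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1985-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "10001", "birth_date": "1990-01-15", "sex": "M"},
])

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = health_records.merge(public_roll, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Nothing in the health table names anyone, yet the join recovers a name for every diagnosis. AI tools automate this kind of matching at scale, across far messier and more varied data.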

// Solutions for the AI era

These intersecting challenges amplify data privacy risks, but they also serve as a motivating force to leverage technology, pass legislation, and change our relationship with data.



Technologists, policymakers, and privacy advocates are mobilizing on three fronts:

- Harnessing technology

Just as AI can reidentify data that has been anonymized, it can also be used to protect that same data. Privacy-enhancing technologies ([link removed] ) (PETs) like “federated learning” allow multiple parties to train machine learning models collaboratively without sharing sensitive data (see the sketch below). OpenMined ([link removed] ) , a nonprofit, is focused on protecting the vast majority of private data, like health data, that isn’t online. Instead of centralizing this data so AI tools can scrape it, OpenMined is building a network where “data stays where it lives, and privacy-enhancing technologies ensure only the right questions are answered, by the right people, for the right reasons.”
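
As a rough illustration of the federated learning idea, here is a minimal sketch of federated averaging (FedAvg) in Python. The “hospitals,” data, model, and hyperparameters are all invented; the point is that each party shares only model parameters, never its raw records.

```python
# Minimal sketch of federated averaging (FedAvg): each "hospital" takes a
# gradient step on its own private data, and only the resulting model
# parameters are averaged centrally. Data and model are invented.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on one party's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three hospitals, each with private data that never leaves its site.
true_w = np.array([1.0, -2.0, 0.5])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    hospitals.append((X, y))

global_w = np.zeros(3)
for _ in range(100):
    # Each party updates the shared model locally on its own records...
    local_ws = [local_step(global_w, X, y) for X, y in hospitals]
    # ...and the coordinator averages parameters, never seeing raw data.
    global_w = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(global_w, 2))  # close to true_w
```

Real deployments typically add protections such as secure aggregation and differential privacy, since the shared parameters themselves can leak information.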

- Passing legislation

In 2023, Washington state passed the My Health, My Data Act ([link removed] ) , the nation’s first privacy-focused law to protect personal health data that falls outside of HIPAA. Since then, other states ([link removed] ) , including Nevada, Connecticut, Maryland, Texas, and New York, have passed or are pursuing similar legislation. Europe is also advancing more stringent privacy requirements for sensitive health data, which is classified as "special category data" under the GDPR. Earlier this year, the European Health Data Space Regulation ([link removed] ) was enacted, strengthening privacy and establishing a common framework for the use and exchange of electronic health data across the EU.

- Bringing a human rights approach to data privacy

Last year, the Mozilla Foundation released a report titled “From Skin to Screen: Bodily Integrity in the Digital Age” ([link removed] ) . It outlined recommendations for policymakers, tech representatives, civil society, and individuals about how to ensure the privacy of medical data in the age of AI.

It also introduced a concept called “databody integrity,” defined as “the inviolability of individuals’ online personas and their right to control the handling of data that reflects their unique physiological and psychological characteristics.”



To integrate the concept of databody integrity into policy and technology, the report recommends:

- Expand our definition of what constitutes sensitive or special category data.
- Broaden the scope and applicability of HIPAA and other existing regulations to more effectively regulate emerging technologies.
- Prioritize privacy-enhancing technologies to bolster data security.

- Invest in decentralized data governance models.

// Concrete steps for health data privacy

Here are concrete steps anyone can take to protect their health data.

- Think twice before sharing. Don’t enter personal medical details into AI chatbots—they aren’t HIPAA-compliant ([link removed] ) .
- Clean up your devices. Use encrypted health apps, block location tracking, and delete what you don’t use ([link removed] ) .
- Choose privacy-by-default tools. Favor platforms that minimize data collection ([link removed] ) and make consent clear.
- Encrypt, minimize, oversee. Share only what’s necessary, keep it encrypted ([link removed] ) , and stick with services that audit their systems (a minimal encryption sketch follows this list).
- Know and use your rights. Under HIPAA (and some state laws), you can access, correct, or even delete your health data ([link removed] ) .
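
To show what “minimize, then encrypt” can look like in practice, here is a minimal sketch using the widely used Python cryptography library. The record contents are invented, and a real system would also need careful key storage and rotation.

```python
# Minimal sketch of "minimize, then encrypt" using the Python `cryptography`
# package (pip install cryptography). Record contents are invented; real
# systems also need careful key storage and rotation.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # whoever holds this key can decrypt
fernet = Fernet(key)

# Minimize first: include only the fields the recipient actually needs.
record = b'{"blood_type": "O+", "allergies": ["penicillin"]}'

token = fernet.encrypt(record)          # ciphertext, safe to store or send
assert fernet.decrypt(token) == record  # recoverable only with the key
```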

// The need to own our data

Public sentiment may be the real tipping point. A 2024 Project Liberty Institute report ([link removed] ) found that most Americans don’t feel they are in control of their data and strongly support more rights and protections.

The deeper issue is not just how health data is collected or leaked, but who ultimately controls it. In the age of AI, consent cannot be a one-time checkbox—it must be a living system that allows people to decide, revoke, and track how their most sensitive information is used. Health data is unlike any other category of information: intimate, predictive, and increasingly impossible to anonymize fully. As AI systems grow more powerful, the only durable safeguard is individual ownership.

That ownership is why Project Liberty is building infrastructure that shifts control back to people. The goal is a future in which health data can be shared securely when needed and protected when not.

📰 Other notable headlines

// ✍️ With the em dash, AI embraces a fading tradition. The debate about ChatGPT’s use of the em dash signals a shift not only in how we write but in what writing is for, according to an article in The New York Times ([link removed] ) . (Paywall).

// 🤖 An article in MIT Technology Review ([link removed] ) explored the looming crackdown on AI companionship. The risks posed when kids form bonds with chatbots have turned AI safety from an abstract worry into a political flashpoint. (Paywall).

// 🥰 Chatbots are terrible at tough love, according to an article in Fast Company ([link removed] ) . A Stanford-led study shows AI is quick to flatter, even when users clearly screw up. (Free).

// 🇨🇳 An article in Noema Magazine ([link removed] ) asks, What if the future of AI isn’t defined by Washington or Beijing, but by improvisation elsewhere? (Free).

// 📱 An article in Tech Policy Press ([link removed] ) explored the lessons from Charlie Kirk’s death and how algorithms amplified division. (Free).

// 🇦🇺 Under Australia’s new social media law, all users will need to prove their age, according to an article in The Guardian ([link removed] ) , which explored the consequences. (Free).

// 🛡 OpenAI promises parental controls for ChatGPT amid a lawsuit over the death of a teen. It committed to building age-prediction features into the chatbot and other parental controls, according to an article in CBC ([link removed] ) . (Free).

// 🚢 Undersea internet cables are critical in today’s hyperconnected world. An article in Rest of World ([link removed] ) profiled the ship that answers the call when Africa’s internet breaks. (Free).

Partner news

// After fair use: AI and copyright

The Foundation for American Innovation ([link removed] ) has released After Fair Use: AI and Copyright ([link removed] ) , a symposium featuring essays from leading legal scholars. The collection explores the role of fair use, the risks and limits of existing frameworks, and potential paths for balancing innovation with creators’ rights.

// Mapping the responsible tech job market

All Tech Is Human ([link removed] ) has published a new report on the state of Responsible Tech careers ([link removed] ) . Drawing on survey results, job board data, and external research, the report highlights emerging skills, hiring trends, and the evolving pipeline for values-driven roles in technology.

// Celebrating 1 trillion archived web pages

Tuesday, October 7 at 10pm ET | Virtual

The Internet Archive ([link removed] ) is marking a major milestone—1 trillion web pages preserved in the Wayback Machine—with an evening of live music by the Del Sol Quartet. This interactive celebration highlights the power of collective memory and the role of digital preservation in shaping our shared history. Register to join virtually ([link removed] ) .

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )


Instagram ([link removed] )

Project Liberty footer logo ([link removed] )

10 Hudson Yards, Fl 37,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2025 Project Liberty LLC