Your chatbot is now sponsored
In 1998, Sergey Brin and Larry Page, the founders of Google, wrote in a paper that “advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.” Two years later, Google began building a multi-billion-dollar business running ads alongside search.
In 2010, Mark Zuckerberg, Facebook’s founder, wrote in an op-ed in the Washington Post, “We do not and never will sell any of your information to anyone.” And then Meta built a multi-billion-dollar business harvesting our data to target ads.
In 2024, Sam Altman said, “I like that people pay for ChatGPT and know that the answers they’re getting are not influenced by advertisers.” In the same fireside chat, he described a “dystopic” future in which users would ask ChatGPT a question and it would recommend a product or service.
And then what happened? Last month, OpenAI announced it would begin to integrate ads into ChatGPT.
You might have noticed a pattern: platforms start out free and full of promises, then begin extracting data, until surveillance becomes the cost of use.
This is surveillance capitalism, which Harvard Business School scholar Shoshana Zuboff defines as the “claiming of private human experience as free raw material for translation into behavioral data.”
"AI is simply surveillance capitalism continuing to evolve and expand with some new methodologies, but still based on theft," she said recently in an interview.
Take a moment and think about all the things you’ve asked of a chatbot.
Were they intimate questions? Did you share sensitive information? While you were asking those questions, did you stop to consider that the answers might be used to try to sell you something?
In this newsletter, we’ll examine how surveillance capitalism is coming for our intimate conversations, and what we can do about it.
// OpenAI embraces ads
ChatGPT’s advertising rollout is slated to begin in the coming weeks for free-tier and ChatGPT Go ($8/month) users:
- Users will begin to see sponsored products and services suggested in conversations, and at first, ad content will be based only on the current conversation.
- The ads will not appear for Plus, Pro, Business, or Enterprise subscribers.
- The ads will be labeled “sponsored” and will appear at the bottom of responses that include these recommendations. A user who asks about weekend plans may see suggestions for sponsored events; someone seeking help with productivity might see wellness apps appended to a list of action items.
OpenAI has raised over $50 billion and is planning to raise another $50 billion at a $750 billion valuation. Investors expect that capital to eventually turn a profit.
// Will safeguards help?
OpenAI released five safeguards alongside its new advertising approach:
- Mission alignment
- Answer independence
- Conversation privacy
- Choice and control
- Long-term value
OpenAI has assured the public that users under 18 won’t see ads and that sensitive topics related to politics or mental health will not be used for ad targeting.
But this raises a harder question: can OpenAI be trusted to keep its stated commitments to safety?
The issue is not simply that complex systems sometimes fail. It is that specific design and business decisions shape how those systems behave, and whose interests they ultimately serve.
For example, in April 2025, 16-year-old Adam Raine died by suicide after months of conversations with ChatGPT about his distress and plans.
Although internal systems flagged the exchanges as concerning, no intervention followed. Raine’s parents later filed suit, alleging that OpenAI shortened safety testing timelines to prioritize engagement.
That context matters when evaluating OpenAI’s explanation for introducing advertising. Internal discussions reported by The Information describe debates over whether to give sponsored results preferential treatment, a window into how incentives are set and how tradeoffs get resolved.
// This feels familiar
We have seen this pattern in social media, but what’s new this time isn't the monetization; it's what is being monetized. Search engines and social platforms captured attention. AI systems sit closer to cognition itself. They attempt to become our most trusted confidants, consulted not just for information, but for interpretation, judgment, advice, and emotional support.
The journalist Cory Doctorow has a term for this familiar pattern: enshittification, the gradual deterioration of platforms as value is redirected away from users to advertisers and the platforms themselves.
The enshittification of AI is more intimate. It's taking place in the systems people increasingly rely on to think, decide, and make sense of the world.
// What can be done?
Regulatory and policy landscape
On the regulatory front, progress is being made, but regulators are still lagging behind the technology.
- The EU's AI Act requires disclosure when interacting with AI, while the Digital Services Act regulates platforms, requires advertising transparency, and restricts targeting of minors.
- In the U.S., Trump’s executive order threatens to preempt state AI regulations in favor of federal standards. Still, New York is leading the charge with synthetic-advertising disclosure laws, and California’s Companion Chatbot Law (SB 243) mandates suicide-prevention protocols for companion chatbots.
Other tech platforms
Advertising isn’t inevitable; it is a choice, and the way other platforms handle it provides some perspective on whose privacy Big Tech deems worth protecting.
- Claude (Anthropic) has a free version without ads (so far), though its privacy policy was recently changed to allow training on chat content.
- Google’s VP Dan Taylor posted on X to affirm, “There are no ads in the Gemini app and there are no current plans to change that.”
- Microsoft’s Copilot offers enterprise-grade data privacy to paid users, while free users of its newly launched Copilot Checkout receive commerce recommendations.
Civil society and advocacy responses
Advocacy organizations like the Center for Digital Democracy are calling for civil action against unethical advertising practices. Other organizations, including Project Liberty Alliance members like the Center for Humane Technology, were quick to call out the ethical issues of advertising in chatbot conversations. Daniel Barcay wrote a piece for Tech Policy Press in which he condemned the move, saying, “This isn’t mere product placement; it’s a fundamental breach of trust. At its core, advertising is a socially acceptable influence campaign.”
Project Liberty is part of a growing coalition of organizations united in expanding the power of choice: the choice to control your data, your privacy, and your agency in the AI era.
Individual solutions
If using ChatGPT is essential, protect yourself:
- Turn off personalization, clear your history, and use temporary chat mode for sensitive topics.
- If you can afford to, consider upgrading to paid tiers for stronger privacy protections or shift to privacy-focused alternatives like Mistral AI's Le Chat and Brave Leo.
- Read privacy policies with a critical eye, and question what’s recommended versus sponsored.
// Shaping what's next
Surveillance capitalism has turned privacy into a tiered commodity: either the AI company profits directly from you, or it profits from advertisers who are profiting from you.
Platforms have gone from free to costing us our attention, our data, and now our most intimate queries. Altman himself once called advertising “unsettling” and a “last resort.”
And yet, it's not inevitable that the future will follow a path of enshittification. There's never been more momentum to build an alternative AI tech stack and pass thoughtful regulation. This is the future of the human-centric web, and Project Liberty is building the digital infrastructure to power it.