From Project Liberty <[email protected]>
Subject Your chatbot is now sponsored
Date February 3, 2026 6:38 PM
  Links have been removed from this email. Learn more in the FAQ.
Ads have entered the AI chat

View in browser ([link removed] )

February 3rd, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )

Your chatbot is now sponsored

In 1998, Sergey Brin and Larry Page, the founders of Google, wrote in a paper ([link removed] ) that “advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.” Two years later, Google began building a multi-billion-dollar business running ads alongside search.

In 2010, Mark Zuckerberg, Facebook’s founder, wrote in an op-ed ([link removed] ) in the Washington Post, “We do not and never will sell any of your information to anyone.” And then Meta built a multi-billion-dollar business harvesting our data to target ads.



In 2024, Sam Altman said ([link removed] ) , “I like that people pay for ChatGPT and know that the answers they’re getting are not influenced by advertisers.” In the fireside chat, he described a “dystopic” future in which users would ask ChatGPT a question, and it would recommend a product or service.



And then what happened? Last month, OpenAI announced ([link removed] ) it would begin to integrate ads into ChatGPT.



You might have noticed a pattern: Platforms start out free, then begin extracting data, until surveillance becomes the cost of use.



This is surveillance capitalism, which Harvard Business School scholar Shoshana Zuboff ([link removed] ) defines ([link removed] ) as the “claiming of private human experience as free raw material for translation into behavioral data.”

"AI is simply surveillance capitalism continuing to evolve and expand with some new methodologies, but still based on theft," she said recently in an interview ([link removed] ) .


Take a moment and think about all the things you’ve asked of a chatbot.



Were they intimate questions? Did you share sensitive information? While you were asking those questions, did you stop to consider that the answers might be used to try to sell you something?



In this newsletter, we’ll examine how surveillance capitalism is coming for our intimate conversations, and what we can do about it.

// OpenAI embraces ads

ChatGPT’s advertising rollout is planned in the coming weeks for free-tier and ChatGPT Go ($8/month) users.

- Users will begin to see sponsored products ([link removed] ) and services suggested in conversations, and at first, ad content will be based only on the current conversation.
- The ads will not appear for Plus, Pro, Business, or Enterprise subscribers.
- The ads will be labeled “sponsored” and appear at the bottom of responses that integrate these new recommendations. A user who asks about weekend plans may receive suggestions for sponsored events. Someone seeking help with productivity might see wellness apps appended to a list of actions.

OpenAI has raised over $50 billion and is planning to raise another $50 billion ([link removed] ) at a $750 billion valuation—capital that investors believe will turn a profit.

// Will safeguards help?

OpenAI released five safeguards ([link removed] ) alongside their new advertising approach.

- Mission alignment
- Answer independence
- Conversation privacy
- Choice and control
- Long-term value

OpenAI has assured the public that users under 18 won’t see ads and that sensitive topics related to politics or mental health will not be used for ad targeting.



But this raises a harder question: Is OpenAI trustworthy regarding its stated commitments to safety?

The issue is not simply that complex systems sometimes fail. It is that specific design and business decisions shape how those systems behave, and whose interests they ultimately serve.

For example, in April 2025, 16-year-old Adam Raine died by suicide after months of conversations with ChatGPT about his distress and plans.

Although internal systems flagged the exchanges as concerning, no intervention followed. Raine’s parents later filed suit, alleging that OpenAI shortened safety testing timelines to prioritize engagement.

That context matters when evaluating OpenAI’s explanation for introducing advertising. Internal discussions reported by The Information ([link removed] ) describe debates about giving sponsored results preferential treatment, reflecting how incentives are set and how tradeoffs are resolved.

// This feels familiar

We have seen this pattern in social media, but what’s new this time isn't the monetization; it's what is being monetized. Search engines and social platforms captured attention. AI systems sit closer to cognition itself. They attempt to become our most trusted confidants, consulted not just for information, but for interpretation, judgment, advice, and emotional support.

The journalist Cory Doctorow has a term for this familiar pattern: enshittification ([link removed] ) , the gradual deterioration of platforms as value is redirected away from users to advertisers and the platforms themselves.

The enshittification of AI is more intimate. It's taking place in the systems people increasingly rely on to think, decide, and make sense of the world.

// What can be done?



Regulatory and policy landscape

On the regulatory front, progress is being made, but regulators are still responding more slowly than the technology is changing.

- The EU's AI Act requires disclosure when interacting with AI, while the Digital Services Act regulates platforms, requires advertising transparency, and restricts targeting of minors.
- In the U.S., Trump's executive order threatens to preempt state regulations ([link removed] ) in favor of federal standards. Still, New York is leading the charge ([link removed] ) with synthetic advertising disclosure laws. California's Companion Chatbot Law (SB 243) mandates suicide prevention protocols for AI.

Other tech platforms

Advertising isn’t inevitable; it is a choice, and the way other platforms handle it provides some perspective on whose privacy Big Tech deems worth protecting.

- Claude (Anthropic) has a free version without ads (so far), though its privacy policy was recently changed to allow training on chat content.
- Google’s VP Dan Taylor posted on X ([link removed] ) to affirm, “There are no ads in the Gemini app and there are no current plans to change that.”
- Microsoft’s Copilot offers enterprise-grade data privacy to paid users, while free users of its newly launched ([link removed] ) Copilot Checkout receive commerce recommendations.

Civil society and advocacy responses

Advocacy organizations like the Center for Digital Democracy ([link removed] ) are calling for civil action against unethical advertising practices ([link removed] ) . Other organizations, including Project Liberty Alliance members like the Center for Humane Technology ([link removed] ) , were quick to call out the ethical issues of advertising in chatbot conversations. Daniel Barcay wrote a piece ([link removed] ) for Tech Policy Press in which he condemned the move, saying, “This isn’t mere product placement; it’s a fundamental breach of trust. At its core, advertising is a socially acceptable influence campaign.”

Project Liberty is part of a growing coalition of organizations united in expanding the power of choice: the choice to control your data, your privacy, and your agency in the AI era.



Individual solutions

If using ChatGPT is essential, protect yourself:

- Turn off personalization, clear your history, and use temporary chat mode for sensitive topics.
- If you can afford to, consider upgrading to paid tiers for stronger privacy protections or shift to privacy-focused alternatives like Mistral AI's Le Chat ([link removed] ) and Brave Leo ([link removed] ) .
- Read privacy policies ([link removed] ) with a critical eye, and question what’s recommended versus sponsored.

// Shaping what's next

Surveillance capitalism has turned privacy into a tiered product: either you pay the AI company directly, or advertisers pay it for the chance to profit from you.

Platforms have gone from free to costing us our attention, data, and intimate queries. Altman called advertising ([link removed] ) “unsettling” and a “last resort.”



And yet, it's not inevitable that the future will follow a path of enshittification. There's never been more momentum to build an alternative AI tech stack and pass thoughtful regulation. This is the future of the human-centric web, and Project Liberty ([link removed] ) is building the digital infrastructure to power it.

📰 Other notable headlines

// 🦞 Have you heard of Moltbook? AI agents now have their own Reddit-style social network, and it’s starting to get weird, according to an article in Ars Technica ([link removed] ) . (Free).

// 🧸 An AI chat toy company left its web console almost entirely unprotected. Researchers found nearly all the conversations children had with the company’s stuffed animals, according to an article in WIRED ([link removed] ) . (Paywall).

// 🤷‍♀️ Anthropic is at war with itself. The AI company shouting about AI’s dangers can’t quite bring itself to slow down, according to an article in The Atlantic ([link removed] ) . (Paywall).

// 📱 Meet UpScrolled, the anti-censorship TikTok alternative. The company’s CEO says users are flooding the platform after the sale of TikTok in the U.S., according to an article in Rest of World ([link removed] ) . (Free).

// 🏛 An article in The Markup ([link removed] ) explored how the lawsuits in California federal and state court are unearthing documents embarrassing to tech companies—and may be a tipping point into federal regulation. (Free).

// 🚫 Some young people only turn to artificial intelligence chatbots as a last resort, citing concerns about relationships, creativity, the environment and more. An article in The Wall Street Journal ([link removed] ) highlighted seven reasons teens say no to AI. (Paywall).

// 🇳🇱 The Netherlands is rethinking its U.S. tech addiction. Dutch society is built on U.S. digital services. That’s now seen as a glaring security issue, according to an article in Politico ([link removed] ) . (Free).

// 🧠 What AI “remembers” about you is privacy’s next frontier. Agents’ technical underpinnings create the potential for breaches that expose the entire mosaic of your life, according to an article in MIT Technology Review ([link removed] ) . (Paywall).

Partner news

// Center for Humane Technology featured in new AI documentary

The AI Doc: Or How I Became an Apocaloptimist ([link removed] ) premiered at the 2026 Sundance Film Festival, raising urgent questions about AI’s promise and peril. The film features insights from Center for Humane Technology ([link removed] ) co-founders Tristan Harris, Aza Raskin, and Randima Fernando on emerging AI risks and humane alternatives. The documentary will be released in theaters on March 27, 2026.

// Metagov fellows launch “Small Hassles Court” minigame

Metagov ([link removed] ) has completed its inaugural Governable Spacemakers Fellowship ([link removed] ) , unveiling Small Hassles Court ([link removed] ) , an experimental minigame that explores self-governance as conflict mediation. The project invites players to navigate minor interpersonal conflicts in a playful digital space, modeling emotional co-regulation and fairness from the bottom up.

// Omidyar Network names 2026 Reporters in Residence cohort

Omidyar Network ([link removed] ) has announced its largest-ever class of Reporters in Residence ([link removed] ) , selecting nine journalists for the 2026 cohort. Over six months, the fellows will independently investigate the power dynamics shaping the AI revolution, from governance and corporate influence to impacts on labor, education, children, and inequality.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )

Twitter ([link removed] )

Instagram ([link removed] )

Project Liberty footer logo ([link removed] )

10 Hudson Yards, Fl 37,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2026 Project Liberty LLC