The social media trials will determine if Section 230 can save Big Tech

February 10th, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Big Tech’s tobacco moment

 

This is Big Tech’s tobacco moment. 

 

In the 1990s, the tobacco industry took a major public hit when internal documents were released showing that executives knew their products were harmful, even as they testified to Congress that nicotine wasn’t addictive.

 

Nearly three decades later, we are watching a similar story play out with Big Tech, right down to the smoking gun: internal documents reveal that Meta knew its platforms were addictive.

 

A historic trial began late last month in California, where a jury will decide whether social media companies can be held liable for harming minors. The lawsuit targets the platforms themselves as defective products, arguing that their design and architecture, not the content they host, have caused harm.

 

This is a novel approach that bypasses Section 230, the 1996 law that has shielded Big Tech from liability for user-generated content.

 

In this newsletter, we analyze recent tech lawsuits and how their outcomes will shape the future of Big Tech, tech policy, and the everyday experience of internet users worldwide.

 

// Social media on trial

On January 27th, jury selection began in K.G.M v. Meta and YouTube (Google) at the Los Angeles Superior Court. This is the first in a series of bellwether trials in 2026, representing over 1,000 lawsuits from families, school districts, and state attorneys general.

 

K.G.M, the 19-year-old plaintiff from California, is seeking damages for addiction-related mental health issues. Her testimony will detail how the compulsion to use social media led to anxiety, disordered eating, and body dysmorphia.

  • For the first time, the question of whether social media platforms deliberately harmed minors will go before a jury rather than being decided by a judge or resolved in a private settlement.

  • K.G.M’s case is part of a larger group of plaintiffs seeking guardrails against addictive features, prominent safety warnings, and stronger age restrictions. Two more individual trials are scheduled for later this year in Los Angeles.

  • Mark Zuckerberg and Instagram’s Adam Mosseri are set to testify, and YouTube CEO Neal Mohan may also be called to the stand. Meta and Google’s defense teams had previously sought to bar expert testimony, but a judge ruled in the plaintiff’s favor. A verdict is expected in March.

  • TikTok and Snapchat were originally defendants, but both settled before the trial.

Matthew Bergman, an attorney at the Social Media Victims Law Center (a Project Liberty Alliance member) who represents K.G.M, described the stakes:

 

//

“The public is going to know for the first time what social media companies have done to prioritize their profits over the safety of our kids.”

//

 

// Further litigation

In addition to the trials in Los Angeles, a federal court in Oakland heard arguments in over 1,000 lawsuits filed by state attorneys general, school districts, and families seeking damages for harms arising from platform design. 

 

Separately, 200 school districts have sued under “public nuisance” claims, arguing that social media platforms have forced them to spend resources on counseling and mental health screening.

 

These cases (with trials set for June) could prove successful: under the same “public nuisance” theory, school districts secured a $1.2 billion settlement from the e-cigarette maker JUUL.

 

    // The "smoking gun" evidence 

Recently unsealed documents illustrate the many ways in which Big Tech has knowingly centered its business model on teen addiction despite documented harms.

• The most damning internal documentation is Meta’s own research. In 2020, the company launched Project Mercury, research it eventually buried because the findings openly admitted the harms caused by its design choices: users who took breaks from social media showed declining rates of anxiety, depression, and loneliness. An employee warned that burying the results would be repeating history, “like tobacco companies doing research and knowing cigs were bad.”

    • Internal employee conversations about Instagram addiction likened it to a drug: “oh my gosh yall IG is a drug…We’re basically pushers,” one Meta researcher wrote.

• A 2017 email from Mark Zuckerberg confirmed that Meta’s target market was teens, and internal communications at Meta assessed “the lifetime value of a 13 y/o teen [to be] roughly $270 per teen.”

• Google also targeted teens with its Shorts content. An internal slide deck described how “negative well-being effects can result from user behaviors,” and how researchers “feel that [YouTube] is built with the intention of being addictive. Designed with tricks to encourage binge-watching (i.e., autoplay, recommendations, etc.).”

    Casey Mock, Senior Policy Advisor to Jonathan Haidt and the national movement behind The Anxious Generation, wrote an article in After Babel that traces how “Meta's lawyers have perfected a playbook of deceit first developed by Big Tobacco—and why ending the era of tech impunity means holding accountable the professionals providing the shield.”

     

// Big Tech’s defense

With damning admissions like those in the internal documents, what legal leg can Big Tech stand on? The companies’ defense centers on a few claims:

• There is no clinical definition of social media addiction, and while there might be a correlation, there is no definitive causal link between social media use and mental health issues.
• Guardrails already exist, including parental controls, age verification, screen limits, and terms of use.
• The features challenged in the lawsuits are editorial choices, not elements of a defective product design, and Section 230 protection should extend to these allegations.
• Many other factors shape teens’ mental well-being, so harms cannot be attributed solely to these platforms.

         

      // Challenging the legal armor of Section 230

      The key differentiator from previous legal battles is the shift from content liability to product liability. 

       

These cases argue that the platforms have manufactured defective products through addictive design choices, including infinite scroll, autoplay, push notifications, and recommendation algorithms. These product features, rather than the content on the platforms, are what caused the harm.

       

      Some courts have begun to accept the distinction. 

      • In Neville v. Snap, an ongoing lawsuit concerning fentanyl overdose deaths from drugs purchased on Snapchat, California courts ruled the platform itself could be sued as a defective product. 
      • The plaintiffs alleged “Snapchat’s design had created an online open-air drug market,” and the courts agreed that Section 230 could not protect the app’s design; it wasn’t the content that caused the harm.

       

      // The history of product liability

Product liability emerged around the time of mass manufacturing, making manufacturers responsible for products that were harmful or defective. A case in the 1940s involving an exploding Coca-Cola bottle helped establish the precedent. The same precedent was applied to Big Tobacco when tobacco companies failed to give consumers fair warning that cigarettes were addictive and, in the long term, harmful.

       

If the legal cases succeed, they could open the floodgates for liability beyond Section 230, leading to mandatory design changes, cigarette-pack-style warning labels (as the former U.S. Surgeon General has endorsed), stronger age restrictions, bans on certain features, and financial damages.

       

Big Tech can afford to pay these damages (as it has paid E.U. fines in the past), but it’s not so much about the money as about the legal precedent. If Big Tech’s products are deemed defective, it could have profound implications for the design of social media as a whole. According to Santa Clara University law professor Eric Goldman, “If the plaintiffs win, the internet will almost certainly look different than it does today.”

       

      // Regulation vs. Litigation

The U.S. has no federal regulation of social media; instead, states have enacted a patchwork of policies. In 2025, multiple states proposed bans or restrictions on social media for minors, but courts struck down the measures on First Amendment grounds. With platforms protected from content liability under Section 230, the regulatory vacuum makes litigation a preferred approach in the U.S.

       

This differs from what’s happening in Europe, where there is stronger political will to pass EU-wide tech policy. The Digital Services Act (DSA) restricts targeted ads to minors and requires large platforms to assess and mitigate systemic risks, including risks to minors, backed by enforcement actions that increasingly scrutinize addictive design. As we explored in a recent newsletter, policymakers in Europe and elsewhere are considering whether to follow Australia’s lead and ban social media for users under 16.

       

      // A turning point for Big Tech

      The lawsuits in California raise broader questions about tech policy strategy: Is reactive litigation an effective way to hold tech accountable? And how can litigation post-harm translate into proactive legislation that prevents harm before it occurs?

       

      The lawsuits are also part of a larger push to hold Big Tech accountable. Antitrust cases are advancing against Google, Meta, Amazon, and Apple. States have passed their own laws (see Utah’s Digital Choice Act). Grassroots organizations like ParentsTogether, Design It For Us, and the Social Media Victims Law Center are mobilizing families and pushing for legislative action.

       

The pattern remains: a billion-dollar industry denies the addictive nature of its product, but internal documents show otherwise. The fight moves to litigation because there aren’t any regulations in place to protect consumers.

       

If these trials favor product liability, the implications for platform design and Big Tech accountability are staggering. Section 230 shields platforms from liability for user-generated content, but content isn’t the issue when deliberately addictive design decisions are causing known harm. The verdicts will determine whether we’ve entered the era of Big Tech’s accountability reckoning, with a jury deciding whether social media is, in fact, a drug.

      📰 Other notable headlines

// 🦞 A journalist from WIRED infiltrated Moltbook, the AI-only social network where humans aren’t allowed. But instead of a novel breakthrough, the site is a crude rehashing of sci-fi fantasies. (Paywall).

       

      // 🤔 The AI revolution is here. Will the economy survive the transition? Michael Burry, Dwarkesh Patel, Patrick McKenzie, and Jack Clark take the debate to Substack. (Free).

       

      // 🤖 Long-running AI agents have arrived. An article in the Wall Street Journal reported that Anthropic’s Claude Code and Cowork agents are a glimpse into the AI-driven future of work. (Paywall).

       

      // 🚫 An article in Politico reported on warnings from experts that AI chatbots are not your friends. (Free).

       

// 🌐 There is a volunteer Wikipedia army protecting against AI slop. The editors are both populating and fighting the world’s regional-language AI engines, according to an article in Rest of World. (Free).

       

      // 📄 An article in Tech Policy Press makes the case that courts are missing the fair use argument in the copyright battle over AI summaries. (Free).

       

      // 🎙 On his podcast, New York Times journalist Ezra Klein interviewed Cory Doctorow and Tim Wu to explore the state of today’s internet. It’s not one we asked for. (Free).


// 🖥 AI is not the only threat menacing Big Tech. An article in The Economist asks: are Meta and Google ads recession-proof? (Paywall).

      Partner news

      // Engaging Generative AI Beyond the Hype

      February 19 | 1PM ET | Virtual

      All Tech Is Human is hosting a livestream conversation on how to engage with generative AI. Authors Maggie Engler (Microsoft AI) and Numa Dhamani (iVerify) will explore what’s changed in areas like AI agents, safety, labor, and governance, and how people can consider risks, responsibilities, and real-world impact. Register here.

       

      // Humanity & AI at Davos

      At the 2026 World Economic Forum, Future of Life Institute President Max Tegmark joined author Yuval Noah Harari for a conversation exploring the future of humanity, the governance of advanced AI systems, and how human agency can be preserved as technology reshapes global society. Watch the discussion here.

       

      // The 2025 Global Dialogues Index Report

The Collective Intelligence Project shared findings from a year of global dialogue about how the world lives with AI in its report, The 2025 Global Dialogues Index Report. Check it out here.

      What did you think of today's newsletter?

      We'd love to hear your feedback and ideas. Reply to this email.

      // Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

       

      Thank you for reading.


      10 Hudson Yards, Fl 37,
      New York, New York, 10001

      © 2026 Project Liberty LLC