From Open Markets Institute <[email protected]>
Subject The Corner Newsletter: Copyright Protections and Critiquing the Abundance Agenda
Date May 23, 2025 7:51 PM
  Links have been removed from this email. Learn more in the FAQ.
Welcome to The Corner. In this issue, we explore how copyright protections, currently under threat from the Trump administration, stand as a bulwark against Big Tech’s use of copyrighted material to turbocharge AI growth.

Open Markets Releases Report, Hosts Conference on Monopolization of Cloud Computing

Last week, the Open Markets Institute released a groundbreaking report [[link removed]], Engineering the Cloud Commons: A Blueprint for Resilient, Secure, and Open Digital Infrastructure, calling for public utility regulation and structural separation, as well as investment in digital public infrastructure. In conjunction with the release, OMI hosted a conference [[link removed]], entitled “Engineering the Cloud Commons: Tackling Monopoly Control of Critical Digital Infrastructure,” convening leading experts to discuss these issues as well as potential solutions. Featured panelists included Vanderbilt University law professor Ganesh Sitaraman; Paris Marx, host of the Tech Won’t Save Us podcast; Amba Kak, co-executive director of the AI Now Institute; and Trey Herr, senior director of the Cyber Statecraft Initiative at the Atlantic Council, among others. The conference discussion received coverage in Global Competition Review [[link removed]]. Watch the discussion here [[link removed]] and read the report, written by OMI’s Europe director Max von Thun and EU research fellow Claire Lavin, here [[link removed]].

Journalists and Artists Lose Out to AI Corporations as Trump Fires Copyright Director

Karina Montoya

The abrupt firing of U.S. Copyright Office director Shira Perlmutter by President Trump, following the agency’s draft report [[link removed]] on copyright and generative artificial intelligence, marks a new chapter in the battle to prevent Silicon Valley from advancing an AI business model based on using copyrighted works to train its systems without the consent of — or compensation to — their creators.

Perlmutter’s firing has sparked a variety of speculation about the motivations behind it. Some saw it as a power play led by Elon Musk [[link removed]], based both on his close relationship with Trump and on his new AI business venture [[link removed]]. More recent reports [[link removed]], though, show that Google and Meta also paid lobbyists to lead a campaign [[link removed]] against Perlmutter while her office prepared to issue its AI report. What’s clear is that the dominant AI corporations don’t want copyright law to stop them from using other people’s work for their own private purposes.

In the draft report, the Copyright Office focused on whether AI companies should compensate copyright holders for using their works to train AI models, following a 2023 public consultation (in which Open Markets participated [[link removed]]). That question is also at the heart of more than 20 lawsuits [[link removed]] making their way through U.S. courts. The Copyright Office’s opinion is not legally binding, but courts routinely rely on such expert research to make decisions.

Google, Meta, Amazon, and Microsoft, as well as some of their AI rivals, fiercely contend that fair use should apply to the internet content and databases they use to build their AI models. They also argue that enforcing copyright law or implementing a new content licensing regime would impede “innovation” and stall progress on generative AI.

Critics of copyright enforcement for the AI market often point to how some corporations have used the law to fortify their market power. In recent decades, for instance, U.S. copyright law has often benefited dominant entertainment companies [[link removed]] rather than the original individual creators.

In the draft report, the Copyright Office said the first key question in assessing fair use of copyrighted works is what the AI model will ultimately be used for. For instance, using copyrighted books to train an AI model to remove harmful content online is very different from using those same books — or images or videos — to train an AI model to produce content “substantially similar to copyrighted works in the dataset.”

The agency also calls for developing a consent framework [[link removed]] that goes beyond the opt-out standard, under which tech companies collect user data first and only later ask for permission to profit from it. Dominant AI corporations have treated this standard, which puts the onus on users to opt out of data collection, as a license to gather, store, and profit from copyrighted works. When creators specifically opt out of allowing use of copyrighted materials, AI corporations may stop collection, but they can continue using previously appropriated works.

The report also warns that AI models trained on copyrighted works can hurt original creators’ property rights in a variety of ways, including by preventing creators from licensing the use of their works to others and by flooding the market with stylistic imitations that diminish the value of the original works.

The Copyright Office’s guidance came at a pivotal time for AI regulation around the world. In February, in Thomson Reuters v. ROSS, a U.S. federal court rejected a fair use defense [[link removed]] for the use of copyrighted works in training AI and machine learning systems, setting a potentially important precedent [[link removed]] for similar cases involving generative AI.

In the UK, a massive campaign [[link removed]] by news media and creators to raise awareness of the same risks the U.S. Copyright Office describes led the UK Parliament to reconsider changes [[link removed]] in legislation that would have hurt creators and journalism. Last week, the California Assembly passed the AI Copyright Transparency Act [[link removed]], a first step [[link removed]] toward transparency and accountability for the use of copyrighted works in AI model training.

In both cases, though, legislatures are still placing too much of the burden on creators to detect and challenge misuse of their works in AI. Big Tech’s growing data monopolies in AI [[link removed]] continue to pose a real threat to creative industries and journalism. The time has come to complete a solid new framework that makes copyright work for the creators it is meant to protect in the first place.

Open Markets Legal Director Challenges Abundance Agenda Stance Against Regulation

Sandeep Vaheesan, legal director at the Open Markets Institute and author of the recent book Democracy in Power: A History of Electrification in the United States [[link removed]], offers a provocative critique in the Boston Review [[link removed]] of the 2025 book Abundance by journalists Ezra Klein and Derek Thompson, which argues that excessive public regulation has hindered progress in addressing the nation’s housing crisis and combating climate change. In his essay, Vaheesan outlines why this deregulatory vision is likely to further empower oligarchy rather than deliver broadly shared prosperity. He instead proposes a 21st-century New Deal rooted in robust public investment, corporate accountability, and democratic participation, similar to the blueprint for a publicly led and managed path to decarbonization offered in his own 2024 book.

📝 WHAT WE'VE BEEN UP TO: Open Markets senior fellow Johnny Ryan scored [[link removed]] a major legal victory in his complaint against the Big Tech platforms when the Belgian Court of Appeal declared the Transparency & Consent Framework (TCF), the system of pop-up windows used to obtain “consent” for data processing, illegal under the EU’s General Data Protection Regulation. The foundation of much of online advertising, the TCF is live on 80% of the internet. “The court’s decision shows the consent system used by Google, Amazon, X, and Microsoft, deceives hundreds of millions of Europeans,” said Ryan, who is also a senior fellow at the Irish Council for Civil Liberties.

The Center for Journalism & Liberty at Open Markets submitted [[link removed]] a detailed public comment to the Federal Trade Commission in response to its inquiry on technology platform censorship. The comment documents how dominant digital platforms — Meta, X, and Google — routinely engage in practices that suppress journalistic content, whether through opaque algorithms, retaliatory suspensions, shadow banning, or compliance with government censorship demands. The submission urges the FTC to pursue structural reforms — including common carrier obligations, algorithmic transparency, and anti-monopoly enforcement — to ensure equal access to digital infrastructure and protect the free flow of information. Read the submission here [[link removed]].

The Open Markets Institute, along with partner organizations, submitted a response [[link removed]] to the UK Competition and Markets Authority’s (CMA) consultation on merger remedies, urging the agency to maintain strong, effective merger control. The submission warns against over-reliance on behavioral remedies and calls instead for structural solutions to address anticompetitive harms. Read the full submission here [[link removed]].

Dr. Courtney Radsch, director of the Center for Journalism & Liberty at Open Markets, condemned [[link removed]] the recently passed House budget reconciliation bill for proposing a decade-long ban on state and local regulation of artificial intelligence, calling it a “stunning assault on state sovereignty.” Her statement was covered in The Hill [[link removed]] in a piece syndicated by Yahoo [[link removed]] and worldofsoftware.org [[link removed]]. Dr. Radsch then applauded [[link removed]] two other AI bills: California’s AI Copyright Transparency Act, which will force AI companies to reveal whether copyright-protected material has been used in training datasets, and a bipartisan proposal in Congress, the Protecting AI and Cloud Competition in Defense Act, which echoed warnings of monopolization in the cloud market made in OMI’s Engineering the Cloud Commons report.

Open Markets legal director Sandeep Vaheesan criticized [[link removed]] the partisan Federal Trade Commission’s vote to dismiss the lawsuit against PepsiCo, a case that had marked a critical step toward reviving federal enforcement of the Robinson-Patman Act, which prohibits price discrimination by powerful retailers. “The partisan FTC’s decision to withdraw the lawsuit is yet another example of the Trump administration siding with the largest corporations and against independent businesses,” Vaheesan said. OMI has written extensively on the benefits of the Robinson-Patman Act, including two 2023 papers [[link removed]] that urge the government to revive enforcement of the lapsed law to build a fairer, more open, more decentralized economy.

Open Markets Institute senior reporter Karina Montoya appeared on a Tech Policy Press [[link removed]] podcast to discuss the remedies trial for Google, at which the Department of Justice proposed to break the corporation’s dominance in search through the sale of its Chrome browser. “The market reality is that Chrome really serves as a vehicle for data collection that goes far beyond search,” she said.

Dr. Courtney Radsch, director of the Center for Journalism & Liberty at Open Markets, joined the TechSequences [[link removed]] podcast to discuss how the surveillance-based business models of today’s dominant tech platforms are fundamentally incompatible with democratic values and human well-being. She described how Meta’s business model, as exposed in the 2025 book Careless People, prioritizes engagement over safety and democracy, and she called for systemic reforms to treat platforms as essential infrastructure.

Dr. Radsch recorded a video for Goodbot [[link removed]], a nonprofit advocating for strong tech governance, on Meta’s news blackout in Canada, calling for more legislation like Canada’s Online News Act (Bill C-18), which seeks to hold platforms accountable. “In a democracy, journalism is not a luxury — it’s a necessity,” she said.

Local California news outlet Davis Vanguard [[link removed]] cited an article in the Harvard Business Review [[link removed]] written last year by OMI legal director Sandeep Vaheesan and OMI chief economist Brian Callaci in which they argued that fixing the housing crisis requires confronting landlord power, rethinking deregulation, and expanding public-sector housing initiatives.

🔊 ANTI-MONOPOLY RISING:

Google agreed to pay the state of Texas nearly $1.4 billion to settle claims that the search and advertising giant systematically violated users’ privacy by not requesting necessary consent for biometric data and location histories. (The Guardian [[link removed]])

A jury found Johnson & Johnson subsidiary Biosense Webster liable for $147 million in damages in a case alleging the corporation abused its dominance in cardiac mapping technology to illegally tie other healthcare services. (MedTech Dive [[link removed]])

The Department of Justice is investigating live entertainment corporation Live Nation over potential illegal collusion with competitors over cancellation and refund policies during the Covid-19 pandemic. (Reuters [[link removed]])

The European Union and the United Kingdom announced a framework for cooperation on antitrust enforcement actions. The final agreement is part of a larger deal to ease trade barriers and increase security coordination. (Wall Street Journal [[link removed]])

We appreciate your readership. Please consider making a contribution to support the continued publication of this newsletter.

DONATE [[link removed]]

📈 VITAL STAT: 700

The number of legislative proposals being considered by U.S. states to regulate AI, according to [[link removed]] the Business Software Alliance. Legislation is mostly aimed at addressing high-risk uses of AI, deepfakes, and government use of AI. (Business Software Alliance [[link removed]])

📚 WHAT WE'RE READING:

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI [[link removed]] — Journalist Karen Hao goes deep into the founding and evolution of artificial intelligence megacorporation OpenAI. In her eye-opening account, Hao recounts the startup’s gradual drift from a small, mission-oriented nonprofit focused on responsible development to a cutthroat corporate actor whose products have spurred an arms race — with the entire world potentially caught in the crossfire.

Order Sandeep Vaheesan’s book:

Democracy in Power: A History of Electrification in the United States [[link removed]] examines the history—and presents a possible future—of the people of the United States wresting control of the power sector from Wall Street, including through institutions like the Tennessee Valley Authority and rural electric cooperatives.

🔎 TIPS? COMMENTS? SUGGESTIONS?

We would love to hear from you—just reply to this e-mail and drop us a line. Give us your feedback, alert us to competition policy news, or let us know your favorite story from this issue.

SUBSCRIBE TO OUR NEWSLETTER [[link removed]] DONATE [[link removed]] Share [link removed] Tweet [link removed] Share [[link removed]] Forward [link removed]

Open Markets Institute

655 15th St NW, Suite 800, Washington, DC xxxxxx

We thought you'd like to be in the know about competition policy news. Liked what you read? Please forward to a friend or colleague.

Written and edited by: Barry Lynn, Karina Montoya, Austin Ahlman, Ezmeralda Makhamreh, and Anita Jain.

Preferences [link removed] | Unsubscribe [link removed]