Welcome to The Corner. In this issue, we explore how copyright protections, currently under threat from the Trump administration, stand as a bulwark against Big Tech's use of copyrighted material to turbocharge AI growth.
Last week, the Open Markets Institute released a groundbreaking report, Engineering the Cloud Commons: A Blueprint for Resilient, Secure, and Open Digital Infrastructure, calling for public utility regulation and structural separation, as well as investment in digital public infrastructure. In conjunction with the release, OMI hosted a conference, entitled “Engineering the Cloud Commons: Tackling Monopoly Control of Critical Digital Infrastructure,” convening leading experts to discuss these issues and potential solutions. Featured panelists included Vanderbilt University law professor Ganesh Sitaraman; Paris Marx, host of the Tech Won’t Save Us podcast; Amba Kak, co-executive director of the AI Now Institute; and Trey Herr, senior director of the Cyber Statecraft Initiative at the Atlantic Council, among others. The conference discussion received coverage in Global Competition Review. Watch the discussion here and read the report, written by OMI’s Europe director Max von Thun and EU research fellow Claire Lavin, here.

Journalists and Artists Lose Out to AI Corporations as Trump Fires Copyright Director

Karina Montoya

The abrupt firing of U.S. Copyright Office Director Shira Perlmutter by President Trump, following the agency’s draft report on copyright and generative artificial intelligence, marks a new chapter in the battle to prevent Silicon Valley from advancing an AI business model based on using copyrighted works to train its systems without the consent of, or compensation to, their creators. Perlmutter’s firing has sparked a variety of speculation about the motivations behind it. Some saw it as a power play led by Elon Musk, based both on his close relationship with Trump and his new AI business venture. More recent reports, though, show that Google and Meta also paid lobbyists to lead a campaign against Perlmutter while her office prepared to issue its AI report.
What’s clear is that the dominant AI corporations don’t want copyright law to stop them from using other people’s work for their own private purposes. In the draft report, the Copyright Office focused on whether AI companies should compensate copyright holders for using their works to train AI models, following a 2023 public consultation (in which Open Markets participated). That question is also at the heart of more than 20 lawsuits making their way through U.S. courts. The Copyright Office’s opinion is not legally binding, but courts routinely rely on such expert research to make decisions.

Google, Meta, Amazon, and Microsoft, as well as some of their AI rivals, fiercely contend that fair use should apply to the internet content and databases they use to build their AI models. They also argue that enforcing copyright law or implementing a new content licensing regime would impede “innovation” and stall progress on generative AI. Critics of copyright enforcement in the AI market often point to how some corporations have used the law to fortify their market power. In recent decades, for instance, U.S. copyright law has often benefited dominant entertainment companies rather than the original individual creators.

In the draft report, the Copyright Office said the first key question in assessing fair use of copyrighted works is what the AI model will ultimately be used for. For instance, using copyrighted books to train an AI model to remove harmful content online is very different from using those same books, or images or videos, to train an AI model to produce content “substantially similar to copyrighted works in the dataset.” The agency also calls for developing a consent framework that goes beyond the opt-out standard, under which tech companies first collect user data and only later ask for permission to profit from it.
Dominant AI corporations have treated the opt-out standard, which puts the onus on users to refuse data collection, as a license to gather, store, and profit from copyrighted works. When creators specifically opt out of allowing use of copyrighted materials, AI corporations may stop collection, but they can continue using previously appropriated works. The report also warns that AI models trained on copyrighted works can hurt original creators’ property rights in a variety of ways, including by preventing them from licensing the use of their works to others and by flooding the market with stylistic imitations that diminish the value of their original works.

The Copyright Office’s guidance came at a pivotal time for AI regulation around the world. In February, in Thomson Reuters v. ROSS, a U.S. federal court rejected a fair use defense for the use of copyrighted works in training AI and machine learning systems, setting a potentially important precedent for similar cases involving generative AI. In the UK, a massive campaign by news media and creators to raise awareness of the same risks the U.S. Copyright Office describes led the UK Parliament to reconsider changes in legislation that would have hurt creators and journalism. Last week, the California Assembly passed the AI Copyright Transparency Act, a first step toward transparency and accountability in the use of copyrighted works for AI model training. In both cases, though, legislatures are still placing too much of the burden on creators to detect and challenge misuse of their works in AI.

Big Tech’s data monopolies in AI pose a real and growing threat to creative industries and journalism. The time has come to complete a solid new framework that makes copyright work for the creators it was meant to protect in the first place.
Open Markets Legal Director Challenges Abundance Agenda Stance Against Regulation

Sandeep Vaheesan, legal director at the Open Markets Institute and author of the recent book Democracy in Power: A History of Electrification in the United States, offers a provocative critique in the Boston Review of the 2025 book Abundance by journalists Ezra Klein and Derek Thompson, which argues that excessive public regulation has hindered progress in addressing the nation’s housing crisis and combating climate change. In his essay, Vaheesan outlines why this deregulatory vision is likely to further empower oligarchy rather than deliver broadly shared prosperity, and he instead proposes a 21st-century New Deal rooted in robust public investment, corporate accountability, and democratic participation, similar to the blueprint for a publicly led and managed path to decarbonization offered in his 2024 book.

📝 WHAT WE'VE BEEN UP TO:
🔊 ANTI-MONOPOLY RISING:
We appreciate your readership. Please consider making a contribution to support the continued publication of this newsletter.

📈 VITAL STAT: 700

The number of legislative proposals being considered by U.S. states to regulate AI, according to the Business Software Alliance. Legislation is mostly aimed at addressing high-risk uses of AI, deepfakes, and government use of AI. (Business Software Alliance)

📚 WHAT WE'RE READING:

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Journalist Karen Hao goes deep into the founding and evolution of artificial intelligence megacorporation OpenAI. In her eye-opening account, Hao recounts the startup’s gradual drift from a small, mission-oriented nonprofit focused on responsible development to a cutthroat corporate actor whose products have spurred an arms race, with the entire world potentially caught in the crossfire.

Order Sandeep Vaheesan’s book: Democracy in Power: A History of Electrification in the United States examines the history, and presents a possible future, of the people of the United States wresting control of the power sector from Wall Street, including through institutions like the Tennessee Valley Authority and rural electric cooperatives.

🔎 TIPS? COMMENTS? SUGGESTIONS? We would love to hear from you. Just reply to this e-mail and drop us a line. Give us your feedback, alert us to competition policy news, or let us know your favorite story from this issue.