THE TRUE THREAT OF OPENAI
Andrew Deck
September 17, 2025
The Nation
_Karen Hao’s recent book on the company argues that its ambitions are not merely about scale or the market but the creation of a global force that rivals a colonial power of old._
OpenAI CEO Sam Altman speaks during a talk session with SoftBank Group CEO Masayoshi Son at an event titled “Transforming Business through AI” in Tokyo, Japan, 2025. (Tomohiro Ohsumi / Getty Images)
In the opening pages of _Empire of AI_, Karen Hao writes that “this
is not a corporate book.” For readers hungry for an insider’s view
of OpenAI, though, there is plenty to chew on. Hao pulls together a
meticulous history of the company and its most headline-making dramas.
That includes the splintering off of key executives and researchers to
found the rival AI company Anthropic; the internal scramble to scale
up ChatGPT as it became the fastest-growing consumer application
in
Silicon Valley history; and the brief but dramatic ousting of CEO Sam
Altman by the OpenAI board in 2023.
Hao’s reporting makes for a forensic and comprehensive look at the
company: She interviews more than 90 current and former OpenAI
executives and employees, and these conversations are bolstered by
company memos, Slack messages, and interviews with dozens of
competitors and critics across the AI industry. Corporate tribalism,
the ills of founders’ syndrome, and start-up culture self-parody are
all captured by Hao in vivid detail. Not once but twice, she recounts corporate retreats at which former OpenAI chief scientist Ilya Sutskever burned a wooden effigy representing deceitful AI superintelligence.
Books in review
Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI
by Karen Hao
Hao is right, in one sense, to say that _Empire of AI_ is not a
“corporate book,” given how little direct access to the company
she received. She writes that OpenAI’s communications team rescinded
an invitation to interview employees at its San Francisco
headquarters. This was not the first time that Hao had been rebuffed
by the company. After she wrote the first major profile of OpenAI in
2019 for _MIT Technology Review_, the company refused to speak with
her for three years. At the time, OpenAI was still presenting itself
as an idealistic and transparent research nonprofit, even though
Hao’s blistering profile
anticipated
its creep toward commercialization and anticompetitive practices.
With new details on the company’s inner workings and a character
study of Altman in particular, _Empire of AI_ checks all the boxes
of a conventional Silicon Valley page-turner. Beneath the corporate
history, though, is a more ambitious project, one seeded in
Hao’s earlier reporting
on
the company. In her eyes, OpenAI’s ambitions are imperial and its
ways of doing business have reenacted the structure of European
empires. Hao argues that OpenAI uses altruistic and utopian
rhetoric—positing a societal abundance that will come about through
increasing automation and superintelligence—to justify its rapid
growth. And she finds that in achieving that growth, OpenAI has
exploited environmental resources and human labor, taking from the
global majority to consolidate the wealth of a small number of
companies and individuals in the United States. “OpenAI is now
leading our acceleration towards this modern-day colonial world
order,” she writes.
This assertion is boldly stated and compellingly argued by Hao,
although it’s often relegated to the background as she returns to
episodes of corporate intrigue at OpenAI’s offices in San Francisco.
At their strongest, Hao’s claims are rooted in both place and
people, backed by her reporting on AI developments in half a dozen
countries. In Colombia, Hao profiles a refugee fighting for pennies on
gig-work platforms while labeling the data used to train generative AI
models. In Chile and Uruguay, she introduces us to activists fighting
off proposed data centers, which threaten to usurp land and siphon potable water from communities facing drought. In Kenya, she
meets an outsourced worker who was paid just a couple of dollars an
hour to filter through the most violent and toxic content produced by
ChatGPT.
Hao’s reporting strips away the veneer of total automation that has
been lathered onto generative AI products. She lays bare the precarious human labor and the vital natural resources that power these products, and how the rise of the AI industry in Silicon Valley has affected communities throughout its global supply chain. She also goes one step further and proposes a diagnosis for what ails OpenAI
and the ethos that has infected the industry at large. And she names
it outright: the pursuit of scale.
Hao paints scaling as a core tenet of the company’s business model
and Altman’s leadership agenda. OpenAI has hinged its success on
exponential, unencumbered growth. This hunger for larger models—and,
in turn, more data, more water, and more land to develop them—echoes
the expansionist ideologies that underlay European imperial projects.
And like those projects, OpenAI’s expansion has come at a human
cost.
OpenAI is hardly the first Silicon Valley company to resemble a
colonial power. And while Hao spends little time situating this growth in recent history, Silicon Valley’s denizens have long
prayed at the temple of scale. With Google and Meta setting the pace,
consumer technology companies over the past two decades have chased
endless user acquisition and “emerging market” penetration.
Whether Netflix, Amazon, or Twitter, these companies have traversed
every corner of the globe to acquire new customers. This ability to
scale technology platforms globally has in turn resulted in multibillion-dollar valuations, their worth contingent on each company’s continued expansion. Even the smallest contraction spells investor crisis, as Facebook saw in 2022, when it lost users for the first time since
the company’s founding nearly 18 years earlier. The following
quarter, Facebook’s perpetual-growth machine continued to chug
along, adding 200 million new users. In Silicon Valley, expansion is
doctrine.
The unchecked pursuit of scale translates new users into dollar
amounts, which invariably leads to inequities. Users in regions
considered the least valuable to these companies’ bottom lines were
onboarded en masse, their personal data extracted and monetized, but
they were left without the platform safety measures and investment
necessary to protect them. The ensuing harms include, but are not
limited to, Facebook’s complicity in genocide in Myanmar and Ethiopia, the use of social media to mark labor activists and political dissidents for assassination in the Philippines, and the exploitation of migrant workers in Amazon’s Saudi Arabian warehouses.
OpenAI now sits firmly among this cadre of Silicon Valley tech giants,
recently earning a valuation of over $300 billion.
While its expansion strategy at times mimics that of Meta or Google,
the mechanics of its scaling, and the harms it has left in its wake,
are distinct.
In 2015, OpenAI was founded as a research nonprofit. Much like its
predecessors that proclaimed they were “connecting the world,”
OpenAI had its own lofty mission: to develop AI that “benefits all
of humanity.” In 2019, Altman, one of several cofounders of the
company, stepped into the role of CEO. His previous work had been as
president of Y Combinator, the
most influential start-up accelerator in Silicon Valley. In that
capacity, Altman helped start-ups get off the ground with seed funding
and mentorship. Far from being a machine-learning expert, Altman was
skilled in go-to-market strategies and glad-handing at pitch meetings.
He counted some of Silicon Valley’s most influential billionaires as
mentors, including Peter Thiel and OpenAI cofounder Elon Musk.
Under Altman’s leadership, OpenAI came to equate a naked thirst for
market dominance with benevolence. The company framed itself as the
only organization that could be trusted to usher in unparalleled
advancements in machine learning and the looming emergence of AI
superintelligence. In the process, OpenAI stopped publicizing many of
its model-building practices and turned its back on the peer-review
process. This was not about preserving trade secrets, according to Altman and others in OpenAI’s leadership, but a way to ensure the “safety” of all humanity.
Years before the release of ChatGPT, Altman and other executives had
resolved to improve OpenAI’s internal deep-learning techniques
exponentially. Hao’s reporting shows that the company achieved this
goal by pioneering the concept of
“scaling laws.” Scaling laws hold that there is a predictable
relationship between an AI model’s performance and three variables:
the amount of training data put into a model; the amount of
computational resources used to train a model; and the model’s size,
or how many parameters it has. Increasing these variables will
proportionally improve the model’s performance. In other words,
OpenAI’s theory of development hinged on building increasingly
larger, more data-hungry, and more resource-intensive models.
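(Hao keeps the math offstage, but for the technically curious, here is a rough sketch of the canonical form, as published by OpenAI researchers in the 2020 paper “Scaling Laws for Neural Language Models.” A model’s test loss L falls as a simple power law in each variable, roughly

L(X) = (X_c / X)^α

where X stands in for the parameter count, the dataset size, or the training compute, and X_c and α are constants fitted from experiments. Lower loss means better performance, so pouring more of any one ingredient into a model buys a predictable, if diminishing, improvement. The formula, in other words, is the business model.)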
Scaling laws underpinned OpenAI’s early decision to, in effect, scrape all of the Internet—or as much of it as was at its disposal—and deal with the inevitable copyright litigation later.
Rather than improve the quality of its training data, or improve the
quality of the neural networks used to process that data, OpenAI chose
to accumulate gigantic datasets. A common refrain in the OpenAI
offices, Hao reports, was that “the secret of how our stuff works
can be written on a single grain of rice.” That secret is the
word _scale_. OpenAI was so instrumental in popularizing scaling laws
across the industry that, at times in the book, Hao simply refers to
them as “OpenAI’s Laws.”
The splashy arrival of ChatGPT in November 2022 proved that OpenAI’s
scaling laws were working, and it turned Silicon Valley on its head.
In December of that year, Google management issued an internal “code
red” alert, which signaled that ChatGPT threatened its core
search business model. Meta made its second pivot in as many years,
turning away from the uncanny virtual playgrounds of the
“metaverse” and toward AI product development. That year, OpenAI
completed its turn from nonprofit research to commercial product
development. However, _Empire of AI_ shows that, far from ushering
in a new Silicon Valley order, OpenAI’s disruption did little to
challenge scaling as a fundamental principle of Big Tech. In the
generative AI boom, the gospel of scaling has simply been refurbished,
and OpenAI has become its latest evangelist.
Last fall, in a blog post titled “The Intelligence Age,” Sam Altman reflected on his
company’s strategy in the simplest of terms. “In 15 words: deep
learning worked, got predictably better with scale, and we dedicated
increasing resources to it,” he wrote. Hao calls attention to the
many silences in Altman and others’ recounting of this strategy,
most importantly by giving voice to the “ghost workers” who made scaling laws feasible in the
first place. As the CEO of Appen, one of the vendors that has
connected OpenAI with contractors in the Global South, told Hao,
“The challenge with generative AI is, the inputs are the entire
corpus of humanity. So you need to control the outputs.”
The work of controlling OpenAI’s outputs often falls on data
workers, frequently hired remotely in Venezuela, India, the
Philippines, and Kenya by way of gig-work platforms. If there was any
doubt about what service these middlemen provide, the leading vendor
in the industry is named Scale AI. Hao spotlights several contractors
throughout the book, detailing the dire circumstances and precarity
that led them to data work and how their dependence on each
platform’s piecemeal assignments and payouts was exploited.
In one chapter, Hao recounts meeting Mophat Okinyi in Nairobi, Kenya.
Okinyi had been contracted by Sama, previously a leading vendor for Facebook’s content moderation. In 2021, Sama workers
were tasked with annotating text produced by large language models
(LLMs) in an effort to train OpenAI’s moderation filters. Rather
than moderating real user posts, as the workers on the Facebook
projects had done, Okinyi reviewed AI-generated text produced by LLMs,
including the most extreme outputs OpenAI researchers could muster.
“The details grew excruciatingly vivid: parents raping their
children, kids having sex with animals,” Hao writes, explaining that
Okinyi had been placed on the “sexual content” team and tasked
with reviewing 15,000 pieces of content each month. His work was done
before ChatGPT even became available to the public, as part of a
preemptive effort to filter out the most hateful elements found in its
underlying models. OpenAI had scraped the worst that the Internet had to offer and fed it into its LLMs; Okinyi was paid to clean up the toxic waste they were now spewing out.
It was invaluable work, but Okinyi was paid less than $13 a day for
it. The trauma he suffered reading AI-generated fantasies of sexual
violence for several hours each day sent Okinyi spiraling into a
depression and contributed to his wife’s decision to leave him.
“Companies pad their bottom line, while the most economically
vulnerable lose out and more highly educated people become
ventriloquists for chatbots,” Hao writes.
Parallel to scaling the training data, OpenAI also doubled down on its
efforts to scale computational power. This required more specialized
chips and larger data centers, warehouses filled wall-to-wall with
droning computers. OpenAI’s computational infrastructure has been
provided for years through its partnership with Microsoft. At times,
Altman would call Microsoft CEO Satya Nadella on
a daily basis to beg, saying, “I need more, I need more, I need
more.”
Hao tracks this push to expand computational power to arid regions in
Chile and Uruguay where AI companies, including Microsoft, have
stealthily laid plans to build massive data centers to meet the
skyrocketing demand. These new centers are no longer the size of mere football fields but of entire university campuses. In
addition to concerns about carbon emissions,
AI data centers produce extreme heat that requires tremendous amounts
of water to cool down. Despite this, many data centers for both
Microsoft and Google have been proposed in areas facing extreme
drought, including Uruguay.
“They are extractivist projects that come to the Global South to use
cheap water, tax-free land, and very poorly paid jobs,” Daniel Pena,
a scholar in Montevideo, told Hao. Pena sued the Uruguayan
environmental ministry to force it to reveal the water-usage terms for
a proposed Google data center near the city, a suit he won in 2023.
That year, thousands of protesters took to the streets to oppose the
government’s appeasement of these water-intensive industries. Hao
recounts seeing protest graffiti scrawled on Montevideo’s city walls
during her reporting trip there: “This is not drought. It’s
pillage.”
Microsoft, Google, Amazon, and Meta spend more money building data centers than almost all other companies in the world combined.
And there is little sign of the construction slowing. In January,
OpenAI announced that it would lead a project to build $500 billion worth of AI infrastructure alongside partners at SoftBank and Oracle, with support from Microsoft. The project, which they christened Stargate, has also been backed by the Trump administration, despite reported struggles to meet its initial building goals. Hao’s term for this type of
rapacious expansion: “the mega-hyperscale.”
In 2019, Sutskever, OpenAI’s former chief scientist, offered a prediction: “I think it’s pretty likely the entire surface of the earth will be covered with solar panels and data centers.” Sutskever, conveniently, failed to mention where the people would go.
_Empire of AI_ is a tonic for both uncritical AI hype and doomerist
prophecies that AI supercomputers will soon end humanity. (If you’re
not familiar with the latter, the book is also a helpful primer
on effective altruism.)
Hao is far from the first to make assertions about the harms of
scaling AI. Her reporting builds on a body of scholarship, including
writings on “data colonialism,” a concept
coined back in 2019. She also pulls on threads stitched into the
groundbreaking papers on AI ethics, including “On the Dangers of
Stochastic Parrots,” the controversies around
which feature prominently in her book’s early chapters.
Hao grounds this scholarship by taking it out of the cloisters of scholarly journals and AI research conferences and into the lives of workers, community organizers, and activists.
Her reporting gives a sense of urgency to an argument that could
easily feel overly conceptual. The AI industry isn’t simply
re-creating familiar colonial power structures through its scaling
philosophies; it is also taking an active toll on communities across
the globe. In drawing a straight line between OpenAI’s “scaling
laws” and these harms, Hao makes clear that if expansionist
ideologies continue to underlie the AI industry’s model development,
this exploitation of vulnerable workers and our environment will only
worsen.
_Empire of AI_ plainly shows that “AI safety” should not be
spoken about in terms of hypotheticals. The most pressing dangers we
face because of this technology do not lie in the realm of some
theoretical coming singularity. The harms of AI have already arrived,
and we have an obligation to oppose them. In the book’s closing
pages, Hao argues that we can still change course by focusing on
building smaller, “task-specific” AI models; that data workers can
be paid fair wages; and that we can reap the benefits of AI without
building data centers that consume our planet. If we continue on our
current course, however, the momentum of hyperscaling will only
increase and the expansion of OpenAI and its imitators will become
that much harder to slow. At stake are the conservation of our planet,
the preservation of cultural knowledge, and the dignity of workers the
world over.
_ANDREW DECK is a staff reporter at Nieman Lab. He covers the rise of AI and its impact on journalism and the media industry._
_Copyright © 2025 The Nation. Reprinted with permission. May not be reprinted without permission. Distributed by PARS International Corp._
_Founded by abolitionists in 1865, The Nation has chronicled the
breadth and depth of political and cultural life, from the debut of
the telegraph to the rise of Twitter, serving as a critical,
independent, and progressive voice in American journalism._