THE WORLD GRAPPLES WITH HOW TO REGULATE ARTIFICIAL INTELLIGENCE
Mathew Ingram
November 2, 2023
Columbia Journalism Review
_It’s been a big week for AI regulation—or at least, the idea of it. When it comes down to it, though, the US, the UK, and the EU are taking different approaches to the problem._
Prime Minister Rishi Sunak welcomes US Vice President Kamala Harris during the AI Safety Summit, the first global summit on the safe use of artificial intelligence, at Bletchley Park in Milton Keynes, Buckinghamshire, on Thursday, November 2, 2023. Press Association via AP Images
IT’S BEEN A BIG WEEK FOR AI REGULATION—or at least, the idea of it. On Monday, the Biden administration published an executive order on “the safe, secure, and trustworthy development and use of artificial intelligence”; while AI has the potential to help solve a number of urgent challenges, the EO said, the irresponsible use of the same technology could “exacerbate societal harms such as fraud, discrimination, bias, and disinformation” and create risks to national security. Then, yesterday, the British government opened a two-day summit on AI safety at Bletchley Park, the site where code-breakers famously deciphered German messages during World War II. Rishi Sunak, the prime minister, said that AI will bring changes “as far-reaching as the Industrial Revolution, the coming of electricity, or the birth of the internet,” but that there is also a risk that humanity could “lose control” of the technology. And the European Union has been trying to push forward AI legislation that it has been working on for more than two years.
AI and its potential risks and benefits are at the top of many agendas at the same time. Yesterday, Vice President Kamala Harris, who is attending the Bletchley Park summit, gave a speech in which she rejected “the false choice that suggests we can either protect the public or advance innovation,” adding that “we can—and we must—do both.” Ahead of the speech, a British official told _Politico_ that it would show that the summit was a “real focal point” for global AI regulation (even if, as _Politico_ noted, it “may overshadow Bletchley a bit”). When it comes down to it, though, the US, the UK, and the EU are taking different approaches to the problem—differences that are, in many cases, the result of political factors specific to each jurisdiction.
In the US, the Biden administration’s order aims to put some bite behind voluntary AI rules that it released earlier this year—but it stops short of an actual law, because there is no chance that one would pass. That’s because Congress—as Anu Bradford, a law professor at Columbia University, told the _MIT Technology Review_—is “deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future.” Partly as a result, some observers have accused the White House of resorting to “hand waving” about the problem. The full executive order is over a hundred pages long; some of those pages are filled with definitions of terms that not every reader will be familiar with (“floating point operation”; “dual-use foundation model”), but there is also some rambling about the potential of AI, both positive and negative.
To add the aforementioned bite, the Biden administration took the unusual step of invoking the Defense Production Act of 1950, a law that is typically used during times of national emergency but has been definitionally stretched in the past. Biden’s order relies on the law to compel AI companies to test their services and algorithms for safety, including through what is known as “red teaming,” whereby employees try to use a system for nefarious purposes as a way of revealing its vulnerabilities. Companies involved in AI research will have to share the results of such testing with the government before they release an AI model publicly, though that requirement will only apply to models that have a certain amount of computing power: more than a hundred septillion floating-point operations (them again), according to the _New York Times_. Existing AI engines that were built by OpenAI, Google, and Microsoft all meet that threshold. But a White House spokesperson said that the rules will likely only apply to new models.
The order also outlines rules aimed at protecting against the potential negative _social_ impacts of AI: for example, it directs federal agencies to take steps to prevent algorithms from exacerbating discrimination in housing, benefits programs, and the criminal justice system (though how exactly they should do so is unclear). And it directs the Commerce Department to come up with “guidance” on how watermarks might be added to AI-generated content, as a way of curbing the spread of AI-generated disinformation. Critics, however, argue that asking for “guidance” could amount to very little: as the _MIT Technology Review_ noted, there is currently no reliable way to determine whether a piece of content was generated by AI in the first place.
Kevin Roose, of the _Times_, has argued that the order looks like an attempt to bridge two opposing factions on AI: some experts want the AI industry to slow down, while others are pushing for “its full-throttle acceleration.” Those who fear the development of superhuman artificial intelligence—including the scientists and others who signed an open letter in March, urging a halt to AI research in apocalyptic (if brief) language—may cheer the introduction of new controls. Supporters of the technology, meanwhile, may just be happy that the order won’t require them to apply for a federal license to conduct AI research, and won’t force AI companies to disclose secrets such as how they train their models.
But as _The Atlantic_ noted—and as with all approaches that aim to please competing constituencies—parts of the order “are at times in tension, revealing a broader confusion over what, exactly, America’s primary attitude toward AI should be.” And—as is also the case with such approaches—not every constituency was pleased. James Broughel, an economist at the Competitive Enterprise Institute, described the order as “regulation run amok,” arguing that it suffers from a “classic Ready! Fire! Aim! mentality” whereby it introduces invasive regulations without first grasping the nature of the problem it is trying to solve. Some of the requirements that sound positive, such as the need for transparency around safety testing, could end up being the opposite, Broughel argues, if they discourage AI companies from doing that kind of testing at all. The order is “not a document about innovation,” Steve Sinofsky, a former Microsoft executive, wrote. “It is about stifling innovation.”
Whether the order achieves anything tangible remains to be seen, but it is at least a timely topic of conversation for the UK’s AI Safety Summit. Yesterday, Michelle Donelan, Britain’s current technology minister (and past marketer for WWE wrestling), released a policy paper called “The Bletchley Declaration” and pledged that the summit would become a regular global event, with future editions already slated to be held in South Korea in six months and then in France. The declaration states that “for the good of all, AI should be designed, developed, deployed, and used in a manner that is safe [and] human-centric, trustworthy and responsible.” It’s hard to disagree, but some observers saw the event in less grand terms: as an attempt by Sunak to boost his flagging popularity ratings at home. Writing for _The Guardian_, Chris Stokel-Walker described the summit as the passion project of a prime minister desperate for a good-news boost as “his government looks down the barrel of a crushing election defeat.”
Attendees at the summit include executives from Tencent and Alibaba, two Chinese tech giants. Their invitations were contentious because of suspicions about China’s motives in the realm of AI. At the summit, Chinese scientists signed a statement referring to AI technology as an “existential risk to humanity.” As such portents of AI doom multiply, some experts believe that they could accelerate over-regulation—and that this, in turn, could benefit large incumbents in the AI space, rather than innovators. In a post on X, Yann LeCun, a noted AI expert who now works for Meta, argued that such statements give ammunition to voices lobbying for a total ban on AI research. LeCun argued that this will result in “regulatory capture,” with a small number of companies from the US West Coast and China controlling the industry.
While all this has been going on, the EU has been working to finalize its AI Act—one of the world’s first pieces of legislation targeted specifically at AI—which proposes rules around everything from the use of the technology to design chemical weapons to the use of copyrighted content to train AI engines, something that authors and other groups are currently suing over (as my colleague Yona TR Golding and I have written recently). The law as drafted also requires companies with AI engines to report their electricity use, among other measures. And it separates AI companies and services into different categories based on the risk they pose. Some European lawmakers said that they hoped the bill would be finalized by the end of this year and adopted in early 2024, before the next European Parliament elections in June. But, as _The Verge_ notes, some EU countries are still not in agreement on parts of the law, and such expeditious passage thus looks unlikely.
In their haste to detail the long-term risks of AI, both the Biden order and the EU’s proposed AI Act have also been accused of overlooking important points about _current_ dangers. _The Atlantic_ notes, for example, that the Biden order mentions how AI technology could help mitigate climate change—but not that large AI engines consume immense quantities of water. Another risk that doesn’t appear anywhere in the US order is the potential for AI deepfakes that could manipulate elections. Stefan van Grieken, the CEO of the AI firm Cradle, told CNBC that this emphasis is akin to a conference of firefighters that talks only about dealing with “a meteor strike that obliterates the country.” Representatives from dozens of civil society groups, meanwhile, wrote an open letter arguing that the UK summit has excluded the workers and communities that will be most affected by AI.
Others likely see the US and EU efforts as excessively stuck in the mud. Last month, Marc Andreessen, a prominent Silicon Valley venture capitalist who has invested in OpenAI, wrote an essay in which he argued that, because AI has the power to save lives, any deceleration of research will end up _costing_ lives. These preventable deaths, Andreessen argued, are “a form of murder.” It may be hard to determine exactly where to situate US, UK, and EU views about AI on the spectrum from existential disaster to unparalleled opportunity. But thanks to Andreessen, we now know where the outer bounds lie.
_MATHEW INGRAM is CJR’s chief digital writer. Previously, he was a
senior writer with Fortune magazine. He has written about the
intersection between media and technology since the earliest days of
the commercial internet. His writing has been published in
the Washington Post and the Financial Times as well as by Reuters
and Bloomberg._