AI, JOB LOSS, AND PRODUCTIVITY GROWTH
Dean Baker
June 12, 2023
CEPR
_ The moral of the story is that there is nothing about AI technology
that should lead to mass unemployment and inequality. If those are
outcomes, it will be the result of how we structured the rules, not
the technology itself. _
Image credit: David S. Soriano, Creative Commons Attribution-Share Alike 4.0
It is really painful to see the regular flow of pieces debating
whether AI will lead to mass unemployment. Invariably, these pieces
are written as though the author has taken an oath that they have no
knowledge of economics whatsoever.
The NYT gave us the latest example on
Sunday, in a piece debating how many jobs will be affected by AI. As
the piece itself indicates, it is not clear what “affected by AI”
even means.
What percent of jobs were affected by computers? The answer would
probably be pretty close to 100 percent, if by “affected” we mean
in some way changed. If by “affected” we mean eliminated, then we
are clearly talking about a much smaller number.
Thinking about AI the way we thought about computers is likely a good place to
start. First of all, we should remember that there were predictions of
massive layoffs and unemployment from computers and robots for
decades. This did not happen.
In fact, we have a measure of the extent to which computers, robots,
and other technology are displacing workers. It’s called
“productivity growth,” and the Labor Department gives us data on
it every quarter.
Productivity is a measure of the value of output that a worker can
produce in an hour. We expect this to increase through time as we get
better equipment and software, we learn how to do things better, and
workers get more educated.
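To make the definition concrete, here is a minimal sketch of how output per hour and its average annual growth rate are computed. The dollar figures are made up for illustration; they are not actual Labor Department data.

# Minimal sketch: labor productivity (output per hour) and its growth rate.
# The figures below are illustrative, not BLS data.

def productivity(real_output, hours_worked):
    """Value of output produced per hour worked."""
    return real_output / hours_worked

def annual_growth_rate(start_level, end_level, years):
    """Average compound annual growth rate between two productivity levels."""
    return (end_level / start_level) ** (1 / years) - 1

p_start = productivity(real_output=6_000, hours_worked=100)  # $60 per hour
p_end = productivity(real_output=6_800, hours_worked=100)    # $68 per hour
print(f"{annual_growth_rate(p_start, p_end, years=6):.1%} per year")  # ~2.1% per year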
For the last two centuries, productivity growth has been a normal
feature of the U.S. economy, and in fact, most normally functioning
economies around the world. This is the basis for rising living
standards through time. It is the reason that we can feed our whole
population, and still export food, even with just around 1.0 percent
of the workforce in agriculture, as opposed to more than 50 percent in
the 19th century.
The big question is the rate at which productivity grows. Productivity
growth has actually been pretty slow in recent years. It has
averaged just 1.3 percent annually since 2006. By contrast, it
averaged close to 3.0 percent in the quarter century from 1947 to
1973.
Rather than being a period of mass unemployment and declining living
standards, the rapid productivity growth in that period was associated
with widespread improvements in living standards. We went from
Depression-era living standards in 1947 to a prosperous middle-class
society by the end, as ordinary workers were able to afford to buy
houses and cars, and send their kids to college.
We should think of the promise of AI in the same way. The first
paragraph in the NYT piece warns/promises:
“In 2013, researchers at Oxford University published a startling
number about the future of work: 47 percent of all United States jobs,
they estimated, were ‘at risk’ of automation ‘over some
unspecified number of years, perhaps a decade or two.’”
That warning is pretty vague, but let’s say that we could use AI to
eliminate 47 percent of current jobs over two decades. If we held GDP
constant over this period, that would roughly correspond to the 3.0
percent annual productivity growth we saw during the post-World War II
boom. And, just as we saw high levels of employment through the
post-war boom (unemployment got down to 3.0 percent in 1969), we could
maintain high employment if the economy had the same sort of rapid
growth that we had in that quarter century. That will be a policy
choice, not an issue determined by technology.
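As a rough back-of-the-envelope check (the 47 percent figure and the two-decade horizon come from the Oxford estimate quoted above; everything else is simple arithmetic), holding output constant while 47 percent of jobs disappear over 20 years does indeed work out to roughly 3 percent annual productivity growth:

# Back-of-the-envelope arithmetic: constant GDP with 47 percent of jobs
# eliminated over 20 years means each remaining worker produces 1 / 0.53
# times as much output, which compounds out to roughly 3 percent per year.

share_of_jobs_eliminated = 0.47
years = 20

output_per_worker_ratio = 1 / (1 - share_of_jobs_eliminated)   # ~1.89x
annual_productivity_growth = output_per_worker_ratio ** (1 / years) - 1
print(f"{annual_productivity_growth:.1%} per year")            # ~3.2% per year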
WILL PROSPERITY BE SHARED?
In the post-war boom the benefits from productivity growth were widely
shared. To be clear, not everyone was doing great. Blacks were openly
discriminated against, and virtually excluded from many better-paying
jobs. The same was true of women, as the barriers were just beginning
to come down. But the gains from productivity growth went well beyond
just a small elite at the top.
Whether that happens with AI and related technologies will depend on
how we as a society choose to structure the rules around AI. One
reason why Bill Gates and others in the tech industry became
incredibly rich was that the government granted patent and copyright
protection for computer software. That was a policy choice. If we did
not have these government-granted monopolies, Bill Gates would
probably still be working for a living. (Okay, maybe he would be
collecting his Social Security by now.)
These monopolies serve a purpose: they provide an incentive to
innovate. But it’s not clear they have to be as long and as strong
as is currently the case. Also, there are other ways to provide
incentives. For example, the government can pay for people to do the
work, as it did when it paid Moderna roughly $1 billion to develop
and test its Covid vaccine. Of course, the government also gave
Moderna control over the vaccine, allowing the company’s stock
to generate five Moderna billionaires in a bit over a year.
It is not hard to envision routes through which AI can lead to
widespread prosperity in a way comparable to what we saw in the
post-war boom. Suppose that we don’t have government-granted
monopolies restricting access to the technology, so that it can be
freely used.
In that world, I could likely go to a medical technician (someone
trained in performing clinical tests and entering data), who could
plug various test results into an AI system, and it would tell me if I
have a heart problem, a kidney problem, or anything else. Rather than
seeing a highly paid physician, I could have most of my health care
needs met with this technology and a reasonably compensated medical
professional, who may get less than one-third of the pay of a doctor.
There would be a similar story with legal assistance. Certainly, for
standard legal processes, like preparing a will or even arranging a
divorce, AI would likely be up to the task. Even in more complicated
cases, AI could likely prepare a brief, which a lawyer could evaluate
and edit in a fraction of the time it would take them if they were
working from scratch.
People have pointed out that AI makes mistakes. There have been many
instances of AI systems inventing facts or citing sources that don’t
exist. This is a real problem, but presumably one that will be largely
fixed in the not-too-distant future. We shouldn’t imagine that AI
systems will ever be perfect,
but the number of errors they make will surely be reduced as the
technology is developed further.
In addition, it is important to remember that humans also make errors.
There are few of us who cannot recall a serious mistake that a doctor
made in diagnosing or treating our own condition or that of a close
family member. A world without mistakes does not exist and cannot be the
basis of comparison. We need AI to be at least as good as the workers
it is displacing, but that doesn’t mean perfect.
AI AND THE DISTRIBUTION OF INCOME
We structured our economy over the last four decades so that most of
the gains from the productivity growth over this period went to those
at the top. Contrary to what is often asserted, most of the gains
actually did not go to corporate profits; they went to workers at the
top of the pay ladder, like CEOs and other top management, Wall Street
types, highly paid tech workers, and doctors and lawyers and other
highly paid professionals. These workers used their political power to
ensure that the rules of the economy were designed to benefit them.
Whether or not that continues in the era of AI will depend on the
power of these groups relative to less highly paid workers. Just to
take an obvious example, doctors may use their political power to
maintain licensing restrictions that prevent less highly trained medical
professionals from making diagnoses and recommending treatments based
on AI.
If that seems far-fetched, we already have laws that make it very
difficult for even very well-trained foreign doctors to practice
in the United States. While the cry of “free trade” was used to
expose manufacturing workers to international competition, and thereby
depress their pay, it almost never came up with doctors and other
highly paid professionals.
Anyhow, we may well see a similar story with AI, where highly paid
professionals use their political power to limit the uses of AI and
ensure that it doesn’t depress their incomes. This also is an issue
with ownership of the technology itself. If we don’t allow for
strong patent/copyright monopolies in AI, and make non-disclosure
agreements difficult to enforce, we can ensure the technology is more
widely available and cheap. This would mean that the gains are widely
shared rather than going to a relatively small group of Bill Gates types.
It is also important to understand how high incomes for a small group
depress incomes for everyone else. Most of us don’t directly pay for
our own health care. We have insurance provided by an employer or the
government. However, insurers are not charities. (You knew that.)
If insurers have to pay out lots of money to doctors, then it will
mean that our employers pay higher premiums, which they will look to
take out of our paychecks. Alternatively, if the government is picking
up the tab, there will be less money to pay for child tax credits, day
care, and other good things.
Also, when the lawyers, doctors, tech workers, and other would-be
beneficiaries of AI get high incomes, they buy more and bigger
houses. That raises the cost of housing for everyone else. We can and
should build more housing, but when you have a small segment of the
population that has far more money than everyone else, it is difficult to
keep housing affordable for ordinary workers.
Anyhow, the point here is straightforward. Keeping down the pay for
those at the top is not an issue of jealousy. The more money that
goes to the top, the less there is for everyone else, as long as we
have not structured the rules in a way that takes away the incentive
to be innovative and productive.
FEAR THE RICH, NOT AI
The moral of the story is that there is nothing about AI technology
that should lead to mass unemployment and inequality. If those are
outcomes, it will be the result of how we structured the rules, not
the technology itself. We need to keep our eyes on the ball and
remember that structuring the rules is a policy choice.
And, one other point: those who want to structure the rules so that
all the money goes to the top will want to say the problem is
technology. It is much easier for them to tell the rest of us that
they are rich and everyone else is not because of technology, rather
than because they rigged the market. Keep that in mind, always.