Will artificial intelligence bring doom or progress?

Renowned physicist Stephen Hawking’s recent death brought new attention to his dire predictions about artificial intelligence (AI). Professor Hawking feared the loss of jobs, rising inequality, and the potential for malevolent AI to threaten human existence. Does disaster loom? I think that AI will bring progress, provided we remain in control.

Robots already do millions of jobs that humans used to, and AI could automate countless more. Self-driving vehicles are on the horizon. And AI will automate knowledge-based service jobs as well, in journalism (AI-written news stories) and the law (automated drafting of legal documents).

Will AI mean the end of jobs for humans? No, because AI will not abolish the basic economic fact of scarcity: our essentially unlimited desires for more and better goods and services exceed what we can produce with available resources and current (or imaginable) technology. We will never run out of things to do.

What will humans do as AI develops? Automatic teller machines (ATMs), a primitive form of AI, provide a possible preview. U.S. banks have around 400,000 ATMs, and yet employ more people than when the first ATM debuted in 1969. Intelligent technology reduced the number of human tellers required per branch, which made branches cheaper to operate, so banks opened more of them.

The automation of good-paying jobs in law and journalism may appear likely to increase inequality. We could end up with a few high-paying jobs working with the robots or AI, while most workers are pushed into low-paying jobs. But automating routine bank transactions freed employees to handle more complicated transactions and to offer new services. Furthermore, automation lowers the cost of traditional goods and services, and a lower cost of living allows a modest salary to sustain a middle-class standard of living.

The prospects are rosy as long as we remain in control of AI. But what about malevolent machines seeking to enslave or destroy their makers? Others share Professor Hawking’s fear. The Bulletin of the Atomic Scientists’ annual “Doomsday Clock” calculation recently began listing potential automated, intelligent killing machines as a doomsday threat. The theme has been a staple of science fiction movies and literature, from the Terminator and Matrix series back to Frankenstein.

As an economist who struggles to use a smartphone, I can’t offer much insight on computer science and machine learning. Economics, though, can offer some perspective.

Good decisions balance benefits against costs, weighting outcomes by their probabilities when results are uncertain. Benefit-cost analysis further requires expressing all benefits and costs in dollar terms. What would be the “cost” of the destruction of civilization? For perspective, world GDP was $78 trillion in 2017, and world population exceeds seven billion. An appropriate figure would have many, many zeros.

Against a cost this staggering, it would seem that no line of AI research carrying even a tiny chance of unleashing malevolent machines could pass a benefit-cost test: multiplying an astronomically large cost by even a minuscule probability still yields an enormous expected loss. This is the economic logic behind the Precautionary Principle, which places the burden of demonstrating safety on the developers of potentially dangerous technology.
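To make that arithmetic concrete, here is a minimal sketch of the expected-value test. The world GDP figure comes from above; the horizon of lost output, the probability of catastrophe, and the research benefit are purely illustrative assumptions, not estimates.

```python
# A minimal sketch of the expected-value benefit-cost test described above.
# The world GDP figure comes from the text; the horizon, probability of
# catastrophe, and research benefit are purely illustrative assumptions.

WORLD_GDP = 78e12            # world GDP in 2017, in dollars
YEARS_OF_OUTPUT_LOST = 100   # assumed horizon of lost output if civilization ends
CATASTROPHE_COST = WORLD_GDP * YEARS_OF_OUTPUT_LOST  # $7.8 quadrillion

def passes_benefit_cost_test(benefit: float, prob_catastrophe: float) -> bool:
    """Return True if the benefit exceeds the expected cost of catastrophe."""
    expected_cost = prob_catastrophe * CATASTROPHE_COST
    return benefit > expected_cost

# A hypothetical $100 billion research benefit fails against a 1-in-10,000
# chance of doom: expected cost = 1e-4 * $7.8e15 = $780 billion.
print(passes_benefit_cost_test(100e9, 1e-4))   # False
```

The point of the sketch is not the particular numbers but that any cost figure with “many, many zeros” dominates plausible benefits even after multiplying by a very small probability.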

The problem, however, is identifying the exact types of AI research, if any, that might produce the rise of the machines. If we cannot make that identification, we cannot ban only the truly dangerous research. The Precautionary Principle might then require shutting down wide swaths of research on computers and automation, which would be incredibly costly. And we still might fail to prevent the seemingly harmless application that spells doom.

One final consideration comes from the U.S. development of atomic weapons. While some libertarians view military research critically, I see the Manhattan Project as demonstrating that the leaders of a freedom-loving nation can have good reasons for developing apocalyptic weapons. We may one day deliberately undertake the fateful AI research ourselves.

Artificial intelligence should contribute to progress as long as it remains under our control, improving our decisions in addition to increasing productivity. But if there is a risk of malevolent AI, we simply may not know enough to avoid the danger.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University.
