Dr. Daniel Sutter: Can we halt research?

Elon Musk, Apple co-founder Steve Wozniak, and Stability AI founder Emad Mostaque have signed an open letter calling for a pause in cutting-edge artificial intelligence (AI) research.

But can we halt research, whether on AI or the gain-of-function research that may have produced the SARS-CoV-2 virus? How much control would be necessary, and what if others do not pause?

The letter reads in part, “Advanced AI could represent a profound change in the history of life on earth, and should be planned for and managed with commensurate care and resources. … Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. … (W)e call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Four potential impacts motivate the proposed pause. Three involve economic and political impacts, like automating jobs and rendering humans obsolete. Since I recently discussed economic impacts, today will focus on the potential “loss of control of our civilization.” Will AI produce intelligent, malevolent machines in the mode of the “Terminator” movies?

Top experts do not rule out AI getting out of control. Is it possible to conduct research to prevent this? And if not, should we permanently halt the research?

Insurance is how markets ensure safety in research and production. If research could produce an explosion destroying the lab and surrounding neighborhood, the lab should carry insurance. Insurers would then impose safety and training requirements on the lab as conditions of coverage. Government might only need to require that AI labs carry insurance. If no insurer would cover a lab at any price, the market halts research.

I strongly favor markets and insurance over government regulation. But AI presents a major problem. What would a war against the machines as in the “Terminator” movies cost? It may be hard to come up with enough zeros. The losses would (easily) bankrupt the insurance industry. And in any event, a hefty insurance payment will not help people enslaved in the “Matrix.”

Bankruptcy forces consideration of government regulation. Yet we still face problems. Do we even know how malevolent consciousness might develop? Suppose the AI pause letter signatories listed ten ways research might produce a science fiction nightmare. Would the path by which malevolent machines eventually emerge be on this list? If not, government regulation will not save us.

Perhaps then we should ban this research. A ban seemingly requires controlling all persons and facilities capable of research. Depending on the nature of the research, this may be very authoritarian. Fortunately, training an AI system like GPT-4 requires enormous computing power, energy, and brainpower. Only a few labs currently possess this capacity, making verifiable compliance at least plausible.

But innovators could devise distributed ways to assemble computer power to evade a ban. The Bitcoin blockchain and distributed denial of service attacks demonstrate the potential for coordinated decentralization.

Suppose U.S. labs halted AI research. Enforcing a ban on China, Russia, and other nations seems difficult, especially since AI promises a path to technological superiority and world domination. Research has an arms race element to it. The Manhattan Project was needed in part to keep Nazi Germany from developing atomic weapons first.

The Obama Administration halted gain-of-function research on viruses in 2014. The COVID lab leak scenario, if true, illustrates another potential consequence of halting research. Gain-of-function research may simply have been shifted from highly competent U.S. bioresearch facilities to the Wuhan Institute of Virology with its documented safety issues. Risky research should be performed under the safest conditions possible.

I have raised numerous questions here, so I will close with a story offering hope.

Recombinant DNA technology emerged in the early 1970s. The potential to create new, deadly viruses was obvious. Leading researchers halted research and organized a conference to set risk thresholds and safety protocols for experiments. The protocols have helped unleash biomedicine while maintaining safety.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.