A robot equipped with artificial intelligence is seen at the AI Xperience Center in Brussels Feb. 19, 2020 (OSV News photo/Yves Herman, Reuters).

In the United States, a country that came of age with the Industrial Revolution, progress and the advance of technology have been so closely associated that many of us have trouble imagining one without the other. Nuclear weapons are perhaps the only American invention most of us really regret, and even those are regarded as historically inevitable and not without benefit as a deterrent to another world war. In any case, we are told (or tell ourselves), you can’t turn back the clock. Or stop it.

Now another, less obviously destructive invention threatens to disrupt our lives no less profoundly than the Bomb did. Artificial Intelligence (AI) appears to be on the brink of remaking our economy, our politics, and perhaps civilization as we know it. Its boosters predict that AI may soon be able to cure diseases such as cancer and mitigate, if not reverse, the effects of climate change. This, they explain, will be the first human invention capable of inventing things we could never invent for ourselves—things that promise to make our lives longer, easier, and more enjoyable. All we have to do is get out of its way.

But other experts say that is the one thing we must not do: if we want AI to serve us, they warn, we will have to make sure that we remain in control of it, which may be more difficult than it sounds. Last year, a survey of AI researchers found that half of them believe there is at least a 10 percent chance that future AI systems will cause the extinction or subjugation of the human race. Most people would probably steer clear of anything that had a 10 percent chance of killing or crippling them, but our tech sector and defense industry seem to be rushing heedlessly toward AI, afraid that, if they don’t, our geopolitical rivals will beat them to it. In retrospect, this may come to look less like the Cold War’s race to the moon than like a blindfolded foot race over a cliff.

The edge of that cliff may be closer than most of us realize. In March, more than a thousand AI researchers and tech entrepreneurs—including Elon Musk and Steve Wozniak, a co-founder of Apple—called in an open letter for a six-month pause in the development of all AI systems more powerful than GPT-4, the “large language model” released by OpenAI earlier that month. Their letter warned that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” The letter’s signatories believe that the risks posed by these “digital minds” are considerable:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?… Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

The open letter—together with alarming press reports of chatbots going rogue—seemed to get Washington’s attention. In May, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing on what, if anything, the federal government could do to manage the risks of AI. Appearing before the subcommittee, Sam Altman, the chief executive of OpenAI, and Gary Marcus, a signatory of the open letter, agreed that the AI industry needs to be vigorously regulated. “I think if this technology goes wrong, it can go quite wrong,” Altman said. “We want to work with the government to prevent that from happening.”

Altman resisted calls for a moratorium on the further development of AI, but Marcus argued that what’s needed is a moratorium not on development but on deployment. That would give our elected leaders enough time to set up a new federal agency to monitor and, where necessary, restrain the AI industry. Such an agency could also help design and implement a watermarking system to identify any content, verbal or visual, generated by AI. Meanwhile, state and federal legislators could pass new laws against the use of AI to propagate misinformation or to pass off what the philosopher Daniel Dennett calls “counterfeit people.” We should always be informed when we are dealing with a chatbot, and we should always have the option of communicating with a real person instead.

Finally, a moratorium on the further deployment of AI would give our whole society more time to prepare for what’s on the way. To prepare well, we will have to find an alternative to the blind enthusiasm and lazy fatalism that have too often characterized American discussions about technology. Between the short-term risk of AI-generated misinformation destabilizing our democracy and the long-term risk of AI turning us all into slaves, there is the medium-term risk that AI will make most of the workforce redundant. Is that really what we want? And if so, how will we make sure that we all have what we need to live good lives when most of us no longer have jobs? We cannot allow the people who stand to get rich from AI—or, worse, AI itself—to answer these questions for us.

Published in the June 2023 issue.
