“We’re Building Technologies That Are, by Their Very Nature, Dual Use”

Toby Walsh, a professor of artificial intelligence at the University of New South Wales, discusses some of the risks artificial intelligence entails—including the possibility of an AI “arms race”—and what steps can be taken to mitigate them.

You’ve written quite a bit about the potential for an AI arms race. Are you more worried about AI falling into the hands of bad actors, or possible unintended consequences regardless of who “owns” the technology? Those are both dangers we should be worried about. There might be unexpected consequences: we might have flash crashes, we might have flash wars. Our systems, although well designed, could interact in strange ways with opposition systems, and that could result in a nasty feedback loop we didn’t intend, so we end up fighting a war we never wanted.
So that’s one risk. Another is, as you say, the chance that they will fall into the hands of the wrong people, and will then be used against us. Many militaries are waking up to that possibility. The chief of the Australian army just went on record saying exactly that, that we should be very worried that these weapons could be used against us by our enemies. And even if we design them well and put ethical safeguards in, other parties will be quite happy to take those ethical safeguards off.

Many of the scientists associated with the Manhattan Project, which built the first atomic bomb, later regretted their work—they said that had they known what their research would create, they wouldn’t have participated. With AI, will there be a clear point when we realize we’re building something dangerous and have time to reconsider? Or will we make that leap without much warning? I think it’s great to look back at history, at instances like the Manhattan Project, and see if we can actually learn how to go about managing technological change. The Manhattan Project is similar but different—I think many of its scientists were initially motivated by a very worthy cause, which was the war going on in Europe and the horrors associated with that war. Of course, by the time the bomb was finished, the war in Europe was over, so I think their motivation changed. And they did call for the Japanese to be shown a demonstration of the bomb on an uninhabited island and told “We have more of these,” but the generals wanted to see the destructive impact the bomb could have on a real city.
Interestingly—and again, to think of the historical precedent—there was a petition put out among the scientists. Of course, this was done in secret, as the project itself was secret, and it was only discovered after the event. Obviously, the military didn’t listen to the scientists and decided to go ahead and drop the bomb. It’s possible that saved lives, but the counterfactual is always difficult to argue.
But there is a fundamental difference. When they were building the first nuclear bomb, they were trying to build an explosive device—the mother of all explosive devices. Here we’re building technologies that are, by their very nature, incredibly dual use. Many of us are working on building them for good ends: improving people’s lives and productivity, making us healthier, wealthier, and happier. But the same technology can be used for military ends. It could be used for good military ends, to save people’s lives, and for ends that I would consider less desirable, in terms of changing the nature of warfare and making it easier and faster to kill people.
So we want the technology; it will have immense benefits. But the same technology that will let an autonomous drone identify, track, and target people will be used to identify, track, and avoid pedestrians. You’d change one line of code.
That same technology is going to save lives—millions of people die in road traffic accidents every year, 30,000 of them in the United States. So it’s going to be of immense benefit to society, to safety, and to mobility. There are immense benefits that come with having autonomous vehicles, so we will go down that path; it’s too desirable not to develop. But the same technology can be repurposed, and we can’t avoid that fact.
But we’ve seen this in many other settings. Chemical weapons are a good example: We didn’t ban chemistry, we banned chemical weapons. But the same chemistry that is used to make fertilizers is used to make explosives.

You signed on to the Asilomar Principles, a set of principles meant to represent a first draft of the kind of ethical regulations that might be used to govern AI development. Yes, I was at Asilomar for the conference held in January 2017 to discuss the responsible, ethical development of AI. That was a very interesting meeting. A lot of my colleagues were there, and the location was chosen for symbolic reasons—that’s where a previous meeting was held about gene editing and genetic manipulation, which resulted in a voluntary moratorium by the scientists on developing those dangerous technologies. There was a lot of consensus about the responsible development of AI.
Since then there have been a number of other initiatives, like the IEEE initiative on ethics and the Partnership on AI that was featured at the conference. So a number of bodies and institutions have come together to provide fora to discuss principles and practice concerning the ethical and responsible development of AI, which is something I think we should be worrying about.
In the past, we hadn’t worried about this. But in the past, AI didn’t have much of an impact on the world, so it didn’t really matter. Now AI is becoming part of our lives—it will become like electricity, woven into the fabric of everything, making decisions that impact people’s lives: whether they get given credit, whether they get let out of jail, many things that are really troubling. We need to think carefully about where and when we add AI to our lives. There’s a very nice article, a transcript of a speech given by the humanist Neil Postman, “Five Things We Need to Know About Technological Change.” He says that technology is a strange intruder into our lives, and we should only let it into those parts of our lives where it’s actually going to do us good. People think of technology in a sort of mythical sense, as though it’s in some sense inevitable. It’s going to happen and we’re just going to have to adapt to it. But we actually get to make choices about where the technology is let into our lives and how.

One of the principles agreed upon was that researchers and policymakers would work hand-in-hand to guide AI’s development. Do you think that’s happening enough? I should say that the principles are a work in progress. I have some reservations about some of them. Some of them, I think, are not specific to AI; they’re things you’d say about any technological change. And perhaps we shouldn’t have specific principles for AI—perhaps they should fit into the general ethical frameworks we have for technological change, full stop.
But AI does introduce some interesting new things, like autonomy. We don’t have truly autonomous systems in our world yet, but we will be building them, and we have to ask these questions. It’s been asked here today, for example: should robots be given rights? Because they’re autonomous, they’re part of our universe, and we may have to worry about responsibilities and rights and things like that. So they do introduce things that require us to formulate fresh ethical principles.
But the short answer is no, that’s for sure. I frequently get asked to talk to people involved in politics, and most of the time they’re trying to understand the technology themselves. There’s a language and science gap there to be filled.
The good thing is that people are waking up to this. They will find that there are many people in the academic and AI communities quite willing to spend time coming to fora like this and talking to politicians in settings that are unusual for scientists, like the United Nations. As scientists we have a real responsibility—these are technologies that will impact everyone’s lives—so we have to ensure there’s an informed conversation, and that decisions aren’t made as they generally are now, by the developers and the technologists.

The last principle said recursively self-improving AI should be strictly controlled, with strong safety guarantees. This was obviously a topic that worried the late Stephen Hawking. Do you think we’re too alarmed about this? Not alarmed enough? Appropriately alarmed? I think we’re too alarmed about this. We don’t know how to build recursively self-improving AI. We’ve never built any recursively self-improving system, despite all the amazing things we’ve managed to build. In fact, I recently wrote an article titled “The Singularity May Never Be Near,” which lists a dozen or so arguments, mostly technical reasons, as to why we may never end up building recursively self-improving AI.
That doesn’t mean we won’t actually build machines that are as intelligent as us, ultimately even much more intelligent. I just suspect we’ll do that the old-fashioned way, which is through our own intelligence, perseverance, and sweat. We won’t just design machines that magically improve themselves. A good analogy is ourselves: despite the fact that we have a good understanding of learning and the brain, we haven’t changed the way we learn. It’s still just as painful. We haven’t improved our learning; we’re not recursively self-improving.

— interview conducted by Josh Raisher

The BERLIN POLICY JOURNAL was a media partner of Aspen Germany’s “Humanity Disrupted” conference, held in Berlin.