“AI Can Change the Balance of Power”

AI is on the verge of becoming a critical part of our societies, says Katrin Suder, former State Secretary in Germany's Defense Ministry. A debate over the changing threats and their impact on security policy is long overdue.

How would you define artificial intelligence and why is it such an important topic for security? That’s a difficult question because we don’t even have a clear and broadly accepted definition of human intelligence. But I would say artificial intelligence is the attempt to recreate human intelligence—the ability to read, recognize patterns, answer questions, and so on—with machines. It’s an old dream in the history of mankind—think of the golem in Jewish mythology, for example. In technical terms, AI means computer programs based on so-called deep learning algorithms. They mimic the structure of the brain in the form of neural networks, which are then fed with large amounts of data. They are able to learn and adapt on their own…

…in order to replace humans? In some tasks and functions, yes, but completely? No. The type of AI we have now is called “weak AI,” a tool that can carry out specific tasks—for example, anticipating when a specific machine component will fail (predictive maintenance), or running the voice control function on your cell phone. You can teach a machine to play the game “Go,” but that same machine cannot then simply turn around and play chess.
When you ask a machine a complex question, you might get “42” as a response—just like in the novel The Hitchhiker’s Guide to the Galaxy by Douglas Adams, when the computer is asked the “ultimate question of life, the universe, and everything.” Yet if someday the development of so-called strong AI succeeds and machines achieve abilities equal to or even superior to the intelligence of man, it would create a completely new reality that would affect all areas of life.
We are witnessing various developments coming together. When we talk about AI, we are essentially talking about four components: the algorithms or programs, the computing power, the data, and then the people steering it—programmers and app developers. Looking at the latest in algorithms and AI, there haven’t been any revolutionary developments. I did my PhD on neural networks in the late 1990s; the mathematical models are far better today and the networks are more complex, but innovations in methodology alone do not indicate a quantum leap.
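
To make the deep-learning description above concrete, here is a minimal sketch of a neural network in Python. Everything in it is an illustrative assumption rather than anything from the interview: the tiny XOR training set, the four hidden units, the learning rate. Real systems differ mainly in scale, not in kind.

```python
# A toy neural network that learns the XOR function by gradient descent.
# All choices (4 hidden units, sigmoid activations, 5,000 training steps,
# learning rate 0.5) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the four input/output pairs of the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases of a network with one hidden layer of 4 units.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the prediction error back through the layers.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    # Nudge the weights against the gradient: "learning" in miniature.
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))  # typically close to [[0], [1], [1], [0]]
```

The loop is the whole point: the network is fed examples, measures its error, and adjusts itself, exactly the "learn and adapt" behavior described above, only at microscopic scale.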

So what would be a quantum leap? In addition to the development of strong AI that I already mentioned, quantum computing would be another non-linear leap. In terms of cryptology, quantum computers would change everything overnight. Take encryption that we’d currently need a million years to crack—a quantum computer could crack it in a millisecond. Everything will happen at unprecedented speed.
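
A rough back-of-the-envelope calculation illustrates why cryptographers take this seriously. The figures below, a 128-bit key and a machine testing a billion keys per second, are assumptions chosen for the sketch. Grover's algorithm gives quantum computers a quadratic speedup on brute-force key search; Shor's algorithm, which breaks public-key schemes such as RSA in polynomial time, is the more dramatic threat behind the "overnight" scenario.

```python
# Back-of-the-envelope comparison of classical vs. quantum key search.
# All figures (key size, guesses per second) are assumptions for
# illustration; real hardware estimates vary enormously.
KEY_BITS = 128
OPS_PER_SECOND = 1e9             # assumed: one billion guesses per second
SECONDS_PER_YEAR = 3.15e7

classical_steps = 2 ** KEY_BITS          # exhaustive search tries every key
grover_steps = 2 ** (KEY_BITS // 2)      # Grover's algorithm: ~sqrt(N) steps

def years(steps):
    return steps / OPS_PER_SECOND / SECONDS_PER_YEAR

print(f"classical search: {years(classical_steps):.1e} years")
print(f"with Grover:      {years(grover_steps):.1e} years")
```

Even the quadratic speedup collapses a 22-digit number of years down to a few hundred; against the public-key encryption that protects most internet traffic, the collapse would be far more complete.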

And that would affect security policy as well? Yes, in a fundamental way! Imagine what would happen if all encryption were suddenly insecure. But back to AI: there is significantly more data now because we have sensors everywhere. Everything is connected—there are chips in our cell phones, our cars, our cameras, and soon even our clothes. At the same time, there is plenty of low-cost computing power to process these huge amounts of data.
AI lives on data; it needs data to learn and adapt. That is what an AI does: it processes and matches vast amounts of data, getting better and better at solving specific problems in the process. New applications emerge almost daily, including in the military sector, with corresponding security policy implications. AI is a central component of the “digital battlefield” or, to put it in more dramatic terms, AI can be used as a weapon.

And that brings us to the controversy over “killer robots,” as they’ve been called… It’s important to be clear here: what are killer robots? Ultimately we’re talking about autonomous weapons systems. And of course, the automation of individual weapon system functions is already happening today, from temperature regulation to flight stabilization. The Eurofighter jet has more than 80 built-in computers, and few people have a problem with that. What’s really at stake in this debate is the autonomous use of kinetic force against humans. And again, it is important to define terms precisely. The air defense systems on naval ships, such as the Rolling Airframe Missile (RAM), also shoot automatically and adjust to their targets autonomously. But those targets are not humans; they are other missiles approaching at high speed, and RAMs are far superior to humans in their ability to respond precisely. Most people don’t consider that problematic, either.
The key question is whether the use of kinetic force against humans can be decided autonomously. The German government has clearly said no—there always needs to be a person involved in such a decision. What other countries do is unfortunately not under our control. But Germany has ruled it out and is calling, rightly so, for more international regulation, as difficult as that may be. The rapid pace of technological development is constantly generating new questions and gray areas.

What developments do you expect to see on the digital battlefield or with AI used as a weapon? There are more and more sensors on the battlefield, but also satellite images, internet data, mobile data, and so on. By digitizing, processing, and presenting all that data, one can gain a competitive advantage. Those who have better information, who manage to put all that information together, win. They can see who the attacker is, how the attacker is equipped, and so on.
But conversely, the more interconnected or digital a system is—whether it’s the Eurofighter or the Puma infantry fighting vehicle—the more vulnerable it is. Digitalization means everything is networked, and the downside is the existence of cyber threats: everything can be hacked. That’s why cyber security—protecting against attacks on computers and programs—is so important. That brings us to the question of what role AI plays in cyberspace. AI can be used as a tool to fend off cyber attacks, and it can detect attack patterns. Whoever manages to develop the best AI will have an advantage in both defense and attack. As with any technology, it’s all about supremacy. We find ourselves in the middle of a global competition, particularly between the US and China. Beijing published its AI strategy about a year ago. It is a very ambitious plan that aims to make China a world leader by 2025.
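
One concrete, if deliberately simplified, instance of "AI detecting attack patterns" is flagging traffic that deviates sharply from a learned baseline. The traffic statistics and threshold below are invented for illustration; production systems use far richer models than a z-score, but the principle, learn what normal looks like and then flag the abnormal, is the same.

```python
# A minimal anomaly detector for network traffic: learn a baseline of
# "normal" request rates, then flag large deviations. The numbers and
# the 4-sigma threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Assumed baseline: requests per minute observed during normal operation.
baseline = rng.normal(loc=200, scale=20, size=10_000)
mean, std = baseline.mean(), baseline.std()

def looks_like_attack(requests_per_minute, threshold=4.0):
    """Flag observations more than `threshold` standard deviations above
    the learned baseline -- a crude stand-in for an AI-based detector."""
    return (requests_per_minute - mean) / std > threshold

print(looks_like_attack(215))   # False: ordinary fluctuation
print(looks_like_attack(900))   # True: looks like a flood attack
```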

Google’s AlphaGo program beat Ke Jie, one of the world’s best players, at a game of Go in May 2017. That was considered a sort of wake-up call for the Chinese, wasn’t it, on par with the Sputnik shock of 1957? Yes, I think it was. There is a glut of data in China; people there appear to be more willing to relinquish their data. China has a different relationship to privacy and data protection. And highly developed sensors and processors are everywhere, in cell phones, cameras, computers, etc. There are around 1.4 billion people in China, and many are very tech savvy—early adopters who take every new innovation on board. The West needs to reconsider its attitude towards China. The theory has been that the Chinese can only copy, not innovate. But that image needs an urgent overhaul. The focus in AI right now is on implementation, and China can do that in a big way. When the Chinese want to achieve something—well, just look at the Belt and Road Initiative.

Who is actually driving development in AI—is it governments, or is it multinationals like Google or Apple? That lies at the core of many AI debates, in particular the question of what path Germany and Europe should take compared to the US, where innovation is driven primarily by companies, or China, where the state steers development. It is important to shape how we want to deal with data, from regulating access to data in the public sector, for instance, to teaching data science in schools. This needs to be done with transparency and with a balanced perspective on both the opportunities and the risks.

Besides the US and China, are there other leading AI countries? Russia’s President Vladimir Putin said recently that whoever leads on AI will rule the world… I’m afraid that’s true. I can’t adequately assess Russia’s capabilities. But it’s clear that we have a state actor that is very active in information and cyberspace.

Can the development of AI be compared to the invention of the nuclear bomb? AI definitely has the potential to change the dynamics in cyberspace and the balance of power. This goes to the very core of security, especially because we have not yet been able to establish international regulations or controls. And there are other aspects that could further shape security policy and also need to be considered: AI is changing the economy as well. What happens when a country is economically superior or even has a monopoly because of AI? What are the implications for global value chains?

Historically speaking, technological innovations often change all aspects of society. What is special about AI? That’s correct: every industrial revolution has also had an impact on security. But today, things are moving much faster. When the assembly line was created, for example, there was a clear impact on the defense sector: weapons could suddenly be produced much faster. And when airplanes were invented, airspace took on a military dimension as well.
But AI’s technological development has a far more immediate and broader impact globally. It’s as if you replaced your bow and arrow with a state-of-the-art fighter jet that costs little and easily goes unnoticed. That is why AI worries me so much—especially because a terrorist group could hijack these technologies. The potential for abuse is enormous. Abusing AI costs almost nothing, and it isn’t immediately apparent when someone develops or steals it. You don’t see, hear, or smell anything, and nothing shows up on a satellite image.

Are you talking about physical attacks on infrastructure? Or psy-ops that influence public opinion? Everything. You have to look at the whole range. Security policymakers have to be ready for all sorts of scenarios. I’m most concerned by the real, physical impact we’ll see when encryption or security systems are cracked. An opponent could derail trains or take control of medical devices or, as we saw with Ukraine’s energy grid, simply turn the lights off. The scenarios are endless and potentially devastating.

Is the German government taking the problem seriously enough? Yes, it is. Look at what happened in the Bundeswehr over the last parliamentary period: cyber was established as an independent military branch, with the build-up of a cyber command center, alongside innovative experiments like the Cyber Innovation Hub and the new cyber degree programs at the Bundeswehr University in Munich.

Will that be enough? That’s hard to say. But ultimately it’s just like developing a new European fighter jet: the Chinese and the Americans are doing things on a completely different level. But does that mean we shouldn’t develop our own? No—we should.

Do we in the West need to reconsider our privacy policies? I think we need to discuss how we deal with data and especially with algorithms. The crucial question is: how do we make sure we know what the algorithms are doing? Who controls the algorithms? This requires a broad discussion, and it’s also a security issue. Take the example of early crisis detection—if an algorithm tells us: “There is a 35 percent chance that a crisis will erupt in a given country in eight months’ time.” What do we do with that information?
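
To picture where such a number could come from, here is a deliberately invented sketch: a logistic model that turns a few hypothetical indicators into a crisis probability. None of the indicators, weights, or figures reflect any real early-warning system.

```python
# A hypothetical early-warning model: a logistic function mapping a few
# invented indicators to a crisis probability. Purely illustrative.
import math

def crisis_probability(indicators, weights, bias):
    score = bias + sum(w * x for w, x in zip(weights, indicators))
    return 1.0 / (1.0 + math.exp(-score))   # logistic (sigmoid) function

# Invented, normalized indicators: food-price inflation, troop movements,
# refugee flows, with weights a model would have learned from past crises.
p = crisis_probability([0.7, 0.3, 0.5], weights=[1.2, 0.8, 0.9], bias=-2.15)
print(f"{p:.0%} chance of a crisis within eight months")
```

The governance question raised above is precisely about outputs like this: who audits the weights, and what action a 35 percent forecast should trigger.
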
Ultimately, we need a broader societal debate. At the moment, perspectives are often undifferentiated; sometimes there is ignorance or even a flat refusal to deal with the issues at hand. But there is no way around digitalization. We have to talk about data and algorithms, about the future of work and education, and about how we want to live together in a world full of AI.