
The AI Revolution


At the Aspen Institute’s “Humanity Disrupted: Artificial Intelligence and Changing Societies” conference, there’s a lot of talk about AI’s potential—and a lot of worry about responsibility.


Artificial intelligence (AI) offers exciting potential but also requires policy oversight—the kind of oversight that, unfortunately, no single government can realistically provide on its own. When Volker Ratzmann, state secretary for the federal state of Baden-Württemberg, opened the Aspen Institute’s “Humanity Disrupted: Artificial Intelligence and Changing Societies” conference at Baden-Württemberg’s representative offices in Berlin, he struck what would become an oft-repeated theme.

“AI presents huge challenges that we have to manage and regulate together,” Ratzmann said. “No country can manage this individually.” Kent Logsdon, the American Embassy’s chargé d’affaires, echoed this sentiment, saying that while governments should not centrally direct AI, there are also “serious issues to be discussed, and many will require leadership to seize benefits while managing risks.”

Over two days, a mix of researchers, industry representatives, and policymakers returned again and again to what humankind could achieve with AI and machine learning over the next several decades—the potential advances in medicine, transportation, and manufacturing, to name only a few—but also to the dangers they could unleash if left unregulated.

Joanna Bryson, a professor at the University of Bath and the Princeton Center for Information Technology Policy, emphasized the complete decoupling of productivity from wages that technology has contributed to, pointing out that as technology dissociates work from the location where it is performed, it will have to be governed more by international treaty than by national regulation. Nicola Beer, a German lawmaker and secretary general of the Free Democratic Party (FDP), added that AI would create as many jobs as it destroyed—or even more—but that it would also call into question the most basic functions of government, changing how we look at concepts like taxation and finance.

But oh, those self-driving cars. Autonomous vehicles also took center stage in several discussions. Anne Carblanc, the head of the OECD’s Digital Economy Policy Division, noted that self-driving cars could cut transportation costs by 40 percent but would endanger 2.2 million to 3.1 million jobs in the US alone over the next two decades. From the industry perspective, meanwhile, Jeff Bullwinkel, an associate general counsel with Microsoft Europe, said the international computing giant shares these concerns but is optimistic that, with proper oversight, the benefits could be enjoyed even as the trade-offs are managed.

It was unclear, however, how exactly that oversight would work, or where it would come from. The researchers working on AI pointed out that artificial intelligence is itself only a tool, and that its ramifications will depend largely on how human beings apply it; in other words, AI doesn’t displace people, people do. It would therefore be essential for policymakers to take the lead in monitoring the development of our increasingly automated society and to ensure that any threats that arise are addressed.

The policymakers, on the other hand, often pushed the same responsibility back to the tech sector, saying that it was the duty of companies to regulate their technological advances, and the responsibility of researchers to build accountability into their machines.

The Innovation Gap

In a panel titled “Driving Innovation: Autonomous Vehicles and the Future of Rail and the Open Road,” the consensus—as one might expect from a panel composed of AI researchers and transportation specialists—was that machine learning will dramatically enhance the efficiency of transportation networks. Yet Magnus Graf Lambsdorff, a partner at the venture capital firm Lakestar, said he was less worried about the ramifications of autonomous vehicles than about the prospect that Germany, famous for its automotive industry, will not be the primary beneficiary: “The amount Germany invests [in AI research] is vanishingly small compared to France or the United States.”

That was a sentiment heard often at the conference: Germany in particular has the skills to be a major global player in AI but, as a late adopter of all things digital, has failed to invest funds, time, and energy in the field. In a panel titled “Is Germany Ready for the AI Revolution?” there was much hand-wringing over why Silicon Valley and China have surpassed Berlin in shaping the technology of the future (and the present, for that matter). MP Thomas Jarzombek noted that Germany lags behind in data processing and knowledge transfer, adding: “The problem is Germans love hardware. We’re a great engineering country, but we’re behind on software.”

That might be headed for change on the European level, at least. A European Union representative delivered the news that the Commission will set up a European AI Alliance as a multi-stakeholder forum, hoping it will become a global platform that will, among other things, generate a comprehensive set of ethical guidelines for deploying AI. How to implement those guidelines (and get all 28 EU members to sign on) is another question.

In one of the more ominous sessions, a panel of defense representatives and researchers discussed whether AI has brought about the third revolution in warfare; Toby Walsh, a professor of computer science at the University of New South Wales, warned of the dangers of allowing “stupid machines the right to decide over life and death,” adding that autonomous weapons systems will be weapons of terror: “You can ask them to do anything, however evil.”

Both the head of the German military’s future analysis branch, Olaf Theiler, and Frank Sauer, a researcher at the Bundeswehr University in Munich, appeared to agree that using AI to analyze or visualize data was welcome—anything beyond that, however, including operational decisions and tactical planning, requires humans to play an active role. It remains to be seen how governments like Germany’s can and will react when adversaries—whether non-state actors or other governments—choose to employ autonomous weapons systems and other AI applications in conflict and warfare.

There was broad consensus, across all the discussions and panels, that AI has already become a part of our lives—think search engines, as Joanna Bryson pointed out, or transcription software. Now it will take inclusive dialogue among society, politics, research, and industry to decide what we want to do with artificial intelligence, and quickly. Because as Frank Kirchner of the German Research Center for Artificial Intelligence in Bremen put it: “There is no law of nature that says that robots can’t become more intelligent than us—we have to make sure we don’t become less intelligent.”

NB. Berlin Policy Journal was a media partner for the Aspen AI2018 Berlin conference.