<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Intelligence &#8211; Berlin Policy Journal &#8211; Blog</title>
	<atom:link href="https://berlinpolicyjournal.com/tag/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://berlinpolicyjournal.com</link>
	<description>A bimonthly magazine on international affairs, edited in Germany&#039;s capital</description>
	<lastBuildDate>Mon, 04 Nov 2019 15:10:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.2.7</generator>
	<item>
		<title>What China’s &#8220;Chips Endeavor&#8221; Can Teach Europe</title>
		<link>https://berlinpolicyjournal.com/what-chinas-chips-endeavor-can-teach-europe/</link>
				<pubDate>Mon, 14 Oct 2019 14:41:34 +0000</pubDate>
		<dc:creator><![CDATA[Kaan Sahin]]></dc:creator>
				<category><![CDATA[Eye on Europe]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Alibaba]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[High Technology]]></category>
		<category><![CDATA[Huawei]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=10954</guid>
				<description><![CDATA[<p>China’s efforts to develop its AI chip industry could provide Europe with important lessons.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/what-chinas-chips-endeavor-can-teach-europe/">What China’s &#8220;Chips Endeavor&#8221; Can Teach Europe</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>China’s efforts to develop its AI chip industry could provide Europe with important lessons for building its own industry and making it globally competitive. </strong></p>
<div id="attachment_10953" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT.jpg"><img aria-describedby="caption-attachment-10953" class="size-full wp-image-10953" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/10/RTX74HIU-CUT-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-10953" class="wp-caption-text">© REUTERS/Stringer</p></div>
<p>In its quest for technological supremacy, China has a specific soft spot: its chip industry.</p>
<p>Beijing’s semiconductor efforts over recent decades have not borne fruit. Be it microprocessors, memory chips, or mobile processors, the country is still not capable of developing its own top-notch chips in any of these fields on a large scale. Consequently, it remains highly reliant on chips produced by the semiconductor market leaders, typically based in the US (e.g. Intel), Taiwan (e.g. Taiwan Semiconductor Manufacturing Co, or TSMC), and South Korea (Samsung Electronics).</p>
<p>This high level of dependence became very clear recently: In 2018, after the Chinese smartphone manufacturer ZTE was accused of illegally selling US equipment to Iran and North Korea, the US Department of Commerce banned American companies from selling their products to it. Unable to buy chips from American chip makers like Qualcomm, ZTE teetered close to bankruptcy. From a Chinese perspective, the last straw came when the same procedure was applied in May, this time with telecoms giant Huawei as the target.</p>
<p>These actions served as a wake-up call for China, pushing it to put greater effort into achieving technological self-sufficiency—a goal President Xi Jinping had already proclaimed. This blatant exposure of China’s vulnerabilities in the global supply chain is particularly painful for decision-makers in Beijing, since Chinese strategists themselves pursue precisely this approach—pushing others to become technologically reliant on China and weaponizing this “interdependence” to exert economic and political pressure when required.</p>
<h3>Untapped Market Potential</h3>
<p>To reduce its reliance on foreign semiconductor industries, China has set up a series of initiatives and funds. However, according to Gu Wenjun, chief analyst at the Shanghai-based semiconductor research company ICWise, it will take up to 40 years for China to reach self-sufficiency in many areas of chip production.</p>
<p>Although China has most probably lost the battle over “traditional chips” for the time being, it might win another one: In line with its ambitious and aggressive efforts to become an AI superpower, Beijing has recently started to cast an eye on AI chips. These chips are specifically designed to run machine learning algorithms at a faster pace and are optimized for AI-specific functions, whether in autonomous vehicles and robots or in cloud computing services and data centers.</p>
<p>For instance, this September, Chinese tech giant Alibaba officially entered the AI chip market by presenting the Hanguang 800. According to the company, this AI chip can shorten computing tasks that would usually take one hour down to a couple of minutes. Just one month earlier, Huawei had presented its first commercial AI computing chip, the Ascend 910. Other tech companies and start-ups such as Baidu, Tencent, Bitmain, and Horizon Robotics intend to follow suit to capitalize on a niche in the semiconductor industry that still holds market potential on an international scale.</p>
<p>Even though their American counterparts such as Google and Facebook have also entered the “AI chip race” (at least for in-house purposes), no clear leader has emerged so far, which gives Chinese companies a chance to tap this market potential.</p>
<h3>Role Model for Europe</h3>
<p>At first glance, one could simply regard this as yet another field where China will take a bold step to solidify its position as an AI superpower. However, Europe can learn from the Chinese approach in its own endeavors to catch up in the global AI power game. It can identify the areas within the AI industry (or in the technological realm in general) where there are opportunities to gain ground or even take the lead globally by seizing the first-mover advantage—something the US and China have done with many AI-related components, and which Europe has missed in the past.</p>
<p>One such area, for instance, could be using AI systems to process machine and engine data (temperature, pressure, rotor speed, etc.), a field in which Europe has a strong industrial base. With such an approach, Europe could combine its strengths in the physical world (i.e. its manufacturing industries) with AI technologies, not least in the context of the growing volume of data generated by the Internet of Things (IoT). Heavy investments in consumer data, by contrast, would most probably mean fighting a losing battle against the US and China.</p>
<p>There are also other data types where Europe could showcase its strengths: According to a study by the Center for Data Innovation, public health data can be leveraged on a large scale within the EU and could provide an opportunity for fueling further AI developments. Another related opportunity for Europe could be to focus on the quality of data, which can compensate for the lack of quantity to some extent. China, for instance, is said to have weaknesses in compiling structured data. However, in order to build on these scenarios, private sector data-sharing approaches in the business-to-business and business-to-government areas must be further supported by institutions such as the European Commission.</p>
<p>Against this backdrop, the European Union and its member states still have a chance to gain ground in the global AI industry. In this context, however, the debate about AI ethics as Europe’s unique selling point is important but not sufficient. Only in combination with a thriving related industry, or at least with certain strong points in the European AI ecosystem, can such a human-centered digital sphere be developed to its full potential and compete with the US and China.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/what-chinas-chips-endeavor-can-teach-europe/">What China’s &#8220;Chips Endeavor&#8221; Can Teach Europe</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>The Devilʼs in the Detail</title>
		<link>https://berlinpolicyjournal.com/the-devils-in-the-detail/</link>
				<pubDate>Thu, 29 Aug 2019 10:17:10 +0000</pubDate>
		<dc:creator><![CDATA[Kaan Sahin]]></dc:creator>
				<category><![CDATA[Berlin Policy Journal]]></category>
		<category><![CDATA[September/October 2019]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=10565</guid>
				<description><![CDATA[<p>A flurry of AI ethics guidelines has been published this year, by the EU, the OECD, and Beijing. But there are many stumbling blocks ahead before binding rules can be implemented.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/the-devils-in-the-detail/">The Devilʼs in the Detail</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p class="p1"><strong>A flurry of AI ethics guidelines has been published this year, by the EU, the OECD, and Beijing. But there are many stumbling blocks ahead before binding rules can be implemented.</strong></p>
<div id="attachment_10573" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online.jpg"><img aria-describedby="caption-attachment-10573" class="wp-image-10573 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2019/08/Sahin_Online-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-10573" class="wp-caption-text">© REUTERS/Jason Lee</p></div>
<p class="p1">Artificial intelligence (AI) is turning into an essential enabler for economic and military affairs. It has also become the tool of choice for surveillance activities in certain countries. Against this backdrop, governments, international organizations, and corporations have been drawing up guidelines on the ethical design and usage of AI algorithms and data.</p>
<p class="p3">In 2018, major technology companies were already drafting related principles, which is hardly surprising, as AI innovations nowadays mostly originate in the private sector. Google published <i>AI at Google: Our Principles</i>, while Microsoft wrote <i>Microsoft AI Principles</i>. Yet their data-driven business models and their commercial interest in AI fuel distrust. Critics accuse them of “ethical white-washing”: the reproach is that their published guidelines are nothing more than a marketing gimmick that aims to distract from their massive and abusive use of AI algorithms.</p>
<p class="p3">Irrespective of whether these accusations are true or not, there is an urgent need for stakeholders other than “profit-driven players” to become genuinely engaged in the AI ethics debate. In April 2019, the European Commission released its “Ethics Guidelines for Trustworthy Artificial Intelligence.” These guidelines were drafted by the 52-member High-Level Expert Group on Artificial Intelligence (HLEG AI), which consists of representatives from politics, industry, research institutions and civil society. The document encompasses seven guiding principles, among them transparency (the traceability of AI systems should be ensured), privacy and data governance (citizens should have control over their own data) and diversity, non-discrimination and fairness (which tackles the bias problems of AI systems).</p>
<p class="p3">In May, the Organization for Economic Co-operation and Development (OECD) released its AI ethics guidelines, the “Recommendation of the Council on Artificial Intelligence.” Even though the document is shorter than the EU one and lighter on detail, its principles are noticeably similar. Later that month, the Beijing AI Principles were announced by the Beijing Academy of Artificial Intelligence (BAAI)—an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government—in a joint effort with several Chinese research institutions and industrial groups involving firms like Baidu, Alibaba and Tencent. In comparison with the guidelines provided by the EU, these principles are more descriptive and less comprehensive. However, they cover three crucial clusters: research and development, use, and governance.</p>
<h3 class="p4">Promising Signals</h3>
<p class="p2">At first sight, it is a welcome development that major international organizations and powerful states are officially addressing ethical concerns about AI. And indeed, it is possible to identify positive aspects of each of the released AI guidelines and their wider significance: the EU document has great scope and is deliberately defined as a living document to be reviewed and updated over time. Given that AI systems are subject to constant change and need continuous adjustment, such a mechanism is indispensable. The EU document also includes a checklist with easy-to-understand questions that companies can use as points of orientation to ensure that ethical concerns are respected.</p>
<p class="p3">With regard to the OECD recommendations, it is worth noting that—even though it is a non-binding document—it is backed by the United States. This means that the Trump administration is officially voicing ethical concerns about AI at an international level, despite its skepticism toward multilateralism. In addition, these recommendations are not limited to the 36 member states of the OECD—six non-members have already embraced the principles as well. As a follow-up measure, an AI Policy Observatory will be established to help implement and monitor adherence to these principles throughout the world. Based on these recommendations but with a more limited scope, the G20 meeting in Japan this June agreed on a set of G20 AI Principles. Both the US and China were signatories.</p>
<p class="p3">Last but not least, there is the promising sign of the Beijing AI Principles. It was surprising and gratifying to see that China’s government—which is widely criticized for using AI as a tool to monitor and grade citizens—is suddenly interested in ethical concerns, stipulating, for instance, that the research and development of AI “should serve humanity and conform to human values.” This can be interpreted as a signal that China wishes to engage in a dialogue with international partners in spite of the increasingly powerful narrative of an “AI race” with the United States.</p>
<h3 class="p4">Stumbling Blocks Ahead</h3>
<p class="p2">Nevertheless, it would be premature to speak of a new era of AI multilateralism and an effective AI ethics framework. The recent haste in drafting AI guidelines is partly motivated by the desire not to be left out of the conversation and the “standard-setting game.” It marks the start of what is likely to be a long-running debate within the international community, with many stumbling blocks ahead. A small sample of these lingering challenges follows:</p>
<p class="p3">First, the devil will be in the detail, as the principles presented by all sides are still very vague. Even the most comprehensive and detailed guidelines—the ones drafted by the EU—fail to set non-negotiable ethical principles or so-called “red lines,” a point criticized even by one of the members of the HLEG AI, the philosopher Thomas Metzinger. At present, all three sets of principles serve more to open up new thematic areas, such as non-discrimination or robustness and safety, to international discussion. Combined with the fact that none of these principles is enforceable by law, this means that countries continue to have a lot of room for maneuver in their application of AI systems.</p>
<p class="p3">Second, the possible applications of AI are too widespread for a one-size-fits-all approach. Different circumstances require different solutions. More specific application areas like manufacturing, surveillance, and the military need additional guidelines.</p>
<p class="p3">Third, ethics is always embedded in a cultural and social context that depends on a system of values shaped by a unique history. Since algorithms will impact many areas of our everyday lives, these cultural differences must be taken into account when drafting AI ethics. For instance, studies show that people in China and in the West have quite different responses to the famous “Trolley Dilemma,” a thought experiment in ethics that forces participants to make a difficult choice between a greater and a lesser evil.</p>
<p class="p3">Ultimately, such culture clashes will also be reflected in international politics. It will be a huge challenge to find common ground, especially if the international community seeks to develop more detailed principles and guidelines. Bringing in additional stakeholders and transferring what are ultimately ethical principles into hard law will be just as difficult.</p>
<h3 class="p4">Great Power Rivalry</h3>
<p class="p2">Finally, and in addition to the challenges related to process, content, and implementation, there is a need to take the geopolitical context into account. This is true for new technologies in general, but especially for general-purpose tools like AI. The great power rivalry between the US and China has only just begun, and emerging technologies with a dual-use nature will be the main drivers of economic profitability and military prowess. Hence, it is highly doubtful whether the so-called AI superpowers—first and foremost Beijing, with its current demonstration of AI-based surveillance of minorities—will be willing to bind themselves in “ethical chains” through a self-imposed ethics regime. This is made evident by these countries’ reluctance to ban lethal autonomous weapons systems.</p>
<p class="p3">That is why it’s imperative that the EU continues to take the lead in the global debate on AI ethics in order to see the emergence of its “third way”—a digital sphere that is human-centered, regulated, and democratic. Yet setting high ethical standards is not enough. The EU and its member states also need to do more to establish a vibrant European AI ecosystem. This means not just encouraging additional investment, but also, among other measures, supporting European companies that develop AI systems. Otherwise, the EU will end up proclaiming and promoting detailed and sophisticated AI ethics guidelines without having any leverage to implement them internationally.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/the-devils-in-the-detail/">The Devilʼs in the Detail</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Everything is AI</title>
		<link>https://berlinpolicyjournal.com/everything-is-ai/</link>
				<pubDate>Thu, 28 Jun 2018 13:40:37 +0000</pubDate>
		<dc:creator><![CDATA[Ludwig Siegele]]></dc:creator>
				<category><![CDATA[Berlin Policy Journal]]></category>
		<category><![CDATA[July/August 2018]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[World Politics]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6914</guid>
				<description><![CDATA[<p>In the coming years, and all across the world, AI will shape politics, the economy, and society. It will also disrupt international affairs. There ... </p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/everything-is-ai/">Everything is AI</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>In the coming years, and all across the world, AI will shape politics, the economy, and society. It will also disrupt international affairs.</strong></p>
<div id="attachment_6859" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online.jpg"><img aria-describedby="caption-attachment-6859" class="wp-image-6859 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Siegele_online-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6859" class="wp-caption-text">© picture alliance/AP Photo/Jeff Chiu</p></div>
<p>“There are many other networks in the world&#8230; But the internet is a network that magnifies the power and potential of all others.” When Hillary Clinton, then US Secretary of State, tried to explain the importance of the internet for foreign and security policy in January 2010, many experts thought it was a waste of time. Should this really be a priority of US foreign policy? After all, as the general thinking went, there were more important issues, like the earthquake in Haiti or global terrorism.</p>
<p>Over eight years later, after the revelations of Edward Snowden, the digital offensive of the so-called Islamic State, and the debate over “fake news,” hardly anyone is dismissing the internet as a distraction from serious foreign policy—even if many no longer regard the internet as a great blessing. (Clinton herself discovered the internet’s dark side in the most painful of ways during the 2016 presidential election.) Foreign policy without the internet is no longer conceivable. This shift in opinion should be a warning to all those who today dismiss another technology as irrelevant for foreign policy: artificial intelligence (AI). Just as the internet has pervaded politics, the economy, and society over the last ten years, AI will in the next ten years appear everywhere and disrupt everything. Any country that tries to ignore this development will lose relevance.</p>
<p>Looking around the Berlin foreign-policy world and beyond, one quickly gets the impression that for many people, AI is still uncharted territory. Some think it’s just a new buzzword from California—“Yesterday it was Big Data, right?” Others predict the end of the world: soon, they say, AI will create an all-powerful superintelligence that will try to subjugate humanity, like in the action movie “Terminator.”</p>
<p><strong>Better than Humans</strong></p>
<p>Misperceptions like this are based on an incomplete understanding of what AI actually is. It is often confused with full (strong) artificial intelligence, which would be superior to human thinking. But that will remain science fiction for the foreseeable future, or perhaps forever. The AI of today is better understood as a “collective intelligence”: it mostly works by digitally extracting patterns from data created by humans. Machine learning, a category into which nearly all important AI technologies fall, usually has two stages. First, neural networks—statistical systems inspired by the human brain—are fed vast amounts of data (for example, cat pictures) so that they learn to recognize patterns (what cats look like). In the second stage, the networks are presented with new data, to which they apply what they have learned. Simply put: unlike other software, AI code isn’t written by programmers, but by data.</p>
<p>Thanks to the huge computational power of cloud computing firms like Amazon and Microsoft, AI services can often already recognize objects and speech better than humans can. The best facial-recognition software already has a 99 percent success rate at identifying faces, though only in laboratory conditions. Speech recognition services achieve results that are nearly as good. Other programs can comfortably read scrawled handwriting once they’ve digested about a hundred pages of it.</p>
<p>Companies can apply basic services like this to complicated tasks, as online giant Google showed at a conference in California in May. Its latest AI system, Google Duplex, was able to book a haircut appointment on the phone without the other person realizing that they were talking to a machine. For this demonstration, Google had to combine at least three AI services: speech recognition, language understanding, and speech synthesis.</p>
<p><strong>AI as a Growth Factor</strong></p>
<p>Big technology firms dominate the AI industry. They possess the most and the best data, programmers, and computer systems. But recently other businesses have begun to make use of AI, too. For example, the clothing retailer H&amp;M uses it to detect fashion trends. Unilever, a producer of household and consumer goods, utilizes AI to evaluate job applicants. The energy company Repsol wants to use AI to make its refineries more efficient, while Siemens is using it to optimize the operation of its gas turbines.</p>
<p>It’s hard to predict how much growth AI will create. But the figures will not be small. Accountants from PricewaterhouseCoopers estimate that by 2030, AI will increase world economic output by 16 trillion dollars—more than China and India combined generate today. That figure is about five times the German GDP. According to Ajay Agrawal, a professor at the University of Toronto and author of the new book “Prediction Machines,” the most important economic effect of AI is that it will sharply reduce the cost of prediction, thus making businesses more productive. Just as electricity made light much cheaper—it soon cost 400 times less than at the start of the 19th century—AI will make it much easier to look into the future.</p>
<p>“AI is like electrical power,” experts say. One day it will be used everywhere. And it’s only a matter of time until that is also the case in foreign and security policy—and in the German foreign ministry. But in what form? In a study released in early 2018, the Berlin think tank Stiftung Neue Verantwortung (SNV) identified three focal points: autonomous weapons, economic effects, and consequences for democracy and society.</p>
<p>Autonomous weapons, which take decisions on their own with the help of AI, are perhaps the most threatening consequence of this technological development. They can take various forms, from automated hacker attacks to self-directing drone swarms. They raise a number of difficult questions, not least how much control over these systems humans can and should have. Arms experts in the US fear ethical asymmetry above all: countries like China could completely dispense with the “human in the loop,” while Western states would potentially refuse to cross this red line.</p>
<p>After long internal discussions, Google decided in May to end its participation in Project Maven, a Pentagon program to develop software that can distinguish between people and things in images made by drones. For Chinese firms like Alibaba and Tencent, there are few concerns about such programs, because these companies are already involved in the “civil-military fusion,” as the government in Beijing calls its close cooperation with tech firms.</p>
<p><strong>AI‘s Impact on Policy</strong></p>
<p>As far as the economic consequences of AI are concerned, the effects on foreign policy are hard to evaluate. The technology could help some countries skip over entire stages of development. China wants to become the world leader in the industry and plans to build an AI economy worth nearly 60 billion dollars a year by 2030. Other countries will lose out, Germany probably among them: the country is considered a straggler. There are fears everywhere that many jobs will be lost, though such fears are probably exaggerated. The McKinsey Global Institute estimates that only five percent of all professions can be fully automated away using currently known technology (though machines could do part of the work in more than half of all activities).</p>
<p>The question of economic concentration comes up more and more: Amazon, Google, Facebook, and Apple are already dominant worldwide, and AI could make them even more powerful. Denmark’s decision to send an ambassador to Silicon Valley was dismissed in many capitals as a publicity stunt, but it could prove to be forward-thinking.</p>
<p>AI’s impact on democracy and society will likely present major challenges for foreign policy. The internet has already shown that human rights and technology don’t always fit together. While Hillary Clinton in 2010 praised “the network of all networks,” the NSA used it to wiretap masses of people worldwide, as the revelations of former NSA employee Edward Snowden showed a few years later.</p>
<p>AI makes these “Snowden contradictions,” as the authors of the SNV study call them, even clearer. The technology isn’t just the perfect surveillance tool: video cameras equipped with special chips already follow people automatically. AI can also be used for mass manipulation that goes well beyond the most recent disinformation campaigns. American researchers recently discovered that the Chinese government is the source of nearly 450 million online comments per year whose main goal is to distract. Most of them are still written by humans, but in the future more and more artificially intelligent bots could be used.</p>
<p>As well as such fundamental problems, practical issues arise. Data is the most important raw material for AI. China possesses the world’s deepest data pool, especially when it comes to consumers. The country’s 772 million internet users are open to new things and ideas: many of them don’t carry cash in their wallets and only pay with their smartphones. Other countries, particularly Germany, are much poorer in data for cultural and legal reasons. In the future, data, like other resources, may be managed on a national level. And data protectionism is an ever-growing problem for global firms, as the Financial Times recently reported. The number of laws that forbid firms from exporting data has nearly tripled in the last ten years.</p>
<p>Finally, there’s the question of how the German foreign ministry and related institutions will themselves use AI. A study published in June by the British think tank Chatham House lays out three possible applications for governments: AI could create models of international negotiations, thus simplifying them; it could predict geopolitically important events; and it could help assess compliance with international arms control treaties. At least in the last two cases, we are no longer talking about the future. Recorded Future, a Swedish-American firm, is already using machine learning for early recognition of hacker attacks and other threats. And software from Palantir, a Silicon Valley firm partially funded by the Pentagon, helps inspectors from the Vienna-based International Atomic Energy Agency (IAEA) in their work in Iraq.</p>
<p><strong>Germany the Laggard</strong></p>
<p>How can politics react to all these challenges? A good AI foreign policy begins with a good AI domestic policy. A number of countries have made this key technology a national priority and published detailed strategy plans. Among them are the US and China, but also France, South Korea, and even smaller countries like Finland. By contrast, the government in Germany—a country that struggles with digitalization in general—has only now begun to seriously engage with the issue.</p>
<p>In terms of the development and use of AI, Germany is average at best. According to a response to a parliamentary inquiry, the federal government spends about €27 million a year promoting AI research, which appears quite low compared to other developed nations, though there are no exact figures for comparison. And with few exceptions, German companies are not at the front of the pack: in early 2018, the Expert Committee on Research and Innovation came to the conclusion that other countries are “much more dynamic” in many areas of AI.</p>
<p>Not being a pioneer means that Germany can learn from other countries’ experiences, the SNV argued in a separate paper published in early June (“Cornerstones of a national strategy for AI”). The federal government, the study said, should have greater ambitions than just better promotion of individual AI technologies. Rather, it “should focus on building and supporting a strong, internationally competitive AI ecosystem.”</p>
<p>In short: we need a stool with lots of legs. Promoting research is certainly one of them, but probably not the most important. More important is a strong base. AI competence can’t only be taught in computer science; it also has to be included in other courses of study. Sufficient computational power and venture capital have to be more easily available. Instead of concentrating on the quantity of data, as China and the US do, Germany should prioritize the quality of data, since smaller quantities of relevant, well standardized data often achieve better results. If the mixture is right, an ecosystem like this will create competitive AI services—faster than would government research programs.</p>
<p>For foreign policy, guidelines for action still need to be written. But some basic principles are already clear. The most important is that going it alone won’t work. Germany is too small to keep up with the competition on its own. Germany’s strategy has to be integrated into a European strategy. The most obvious partner is France, which is already much farther ahead in terms of developing and using AI.</p>
<p><strong>Competition for Top Talent</strong></p>
<p>Germany also has to figure out which sort of AI it wants to stand for. There’s a lot of space between China’s state-capitalism approach and the American data monopolies, and that’s where Germany can make its name. It’s not just about the ethics of using AI, but about a new, smart operating system for the data economy. How can markets be organized around this unusual resource? How can personal data be anonymized? Should people be paid for the data they create?</p>
<p>The answers will also have consequences for the labor supply. AI is about computational power and, of course, data, but without a critical mass of data experts, Germany will struggle to keep up with the rest of the world. Attracting and keeping these people is not just a question of salary, although that area can’t be neglected. Even OpenAI, a non-profit organization in Silicon Valley, pays its top scientists almost two million dollars a year. It’s more important that Germany is considered an attractive AI location. There are implications for foreign ministries, too: if you don’t attract or train employees with AI skills, you will become less important.</p>
<p>The foreign-policy impact of the internet, described early on by Hillary Clinton, took a long time to become clear. But then the internet showed its global political impact with full force. It will probably be the same with artificial intelligence. Foreign-policy specialists should be prepared, if they don’t want to play catchup.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/everything-is-ai/">Everything is AI</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>&#8220;AI Can Change the Balance of Power&#8221;</title>
		<link>https://berlinpolicyjournal.com/ai-can-change-the-balance-of-power/</link>
				<pubDate>Thu, 28 Jun 2018 13:35:33 +0000</pubDate>
		<dc:creator><![CDATA[Katrin Suder]]></dc:creator>
				<category><![CDATA[Berlin Policy Journal]]></category>
		<category><![CDATA[July/August 2018]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Security Policy]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6912</guid>
				<description><![CDATA[<p>AI is on the verge of becoming a critical part of our societies, says former State Secretary of Defense Katrin Suder. A debate over ... </p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/ai-can-change-the-balance-of-power/">&#8220;AI Can Change the Balance of Power&#8221;</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>AI is on the verge of becoming a critical part of our societies, says former State Secretary of Defense Katrin Suder. A debate over the changing threats and their impact on security policy is long overdue.</strong></p>
<div id="attachment_6851" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online.jpg"><img aria-describedby="caption-attachment-6851" class="wp-image-6851 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Suder_online-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6851" class="wp-caption-text">© Bundeswehr</p></div>
<p><strong>How would you define artificial intelligence and why is it such an important topic for security?</strong> That’s a difficult question because we don’t even have a clear and broadly accepted definition for human intelligence. But I would say artificial intelligence is the attempt to recreate human intelligence—the ability to read, recognize patterns, answer questions, and so on—with machines. It’s an old dream in the history of mankind—think of the golem in Jewish mythology, for example. In technical terms, AI means computer programs based on so-called deep learning algorithms. They mimic the structure of the brain in the form of neural networks which then are fed with large amounts of data. They are able to learn and adapt on their own…</p>
<p><strong>…in order to replace humans?</strong> In some tasks and functions, yes, but completely? No. The type of AI we have now is called “weak AI,” a tool that can carry out specific tasks—for example, anticipating when a specific machine component fails (predictive maintenance), or running the voice control function on your cell phone. You can teach a machine to play the game “Go,” but it’s a long way from being able to play chess.<br />
When you ask a machine a complex question, you might get “42” as a response—just like in the novel <em>The Hitchhiker’s Guide to the Galaxy</em> by Douglas Adams, when the computer is asked the “ultimate question of life, the universe, and everything.” Yet if someday the development of so-called strong AI succeeds and machines achieve abilities equal to or even superior to the intelligence of man, it would create a completely new reality that would affect all areas of life.<br />
We are witnessing various developments coming together. When we talk about AI, we are essentially talking about four components: the algorithms or programs, the computing power, data, and then the people steering it—programmers and app developers. Looking at the latest in algorithms and AI, there haven’t been any revolutionary developments. I did my PhD on neural networks in the late 90s; the mathematical models are far better today and the networks are more complex, but innovations in methodology alone do not indicate a quantum leap.</p>
<p><strong>So what would be a quantum leap?</strong> In addition to the development of strong AI that I already mentioned, quantum computing would be another non-linear leap. In terms of cryptology, quantum computers would change everything overnight. Take encryption that we’d currently need a million years to crack—a quantum computer could crack it in a millisecond. Everything will happen at unprecedented speed.</p>
<p><strong>And that would affect security policy as well?</strong> Yes, in a fundamental way! Imagine what would happen if all encryption were suddenly insecure. But back to AI: there is significantly more data now because we have sensors everywhere. Everything is connected—there are chips in our cell phones, our cars, our cameras, and soon even our clothes. At the same time, there is plenty of low-cost computing power to process these huge amounts of data.<br />
AI lives on data to learn and adapt. That is what an AI does – it processes and matches vast amounts of data, getting better and better at solving specific problems in the process. New applications emerge almost daily, including in the military sector with corresponding security policy implications. AI is a central component of the “digital battlefield” or, to put it in more dramatic terms, AI can be used as a weapon.</p>
<p><strong>And that brings us to the controversy over “killer robots,” as they’ve been called…</strong> It’s important to be clear here: what are killer robots? Ultimately we’re talking about autonomous weapons systems. And of course, the automation of individual weapon system functions is already happening today, from temperature regulation to flight stabilization. The Eurofighter jet has more than 80 built-in computers, and few people have a problem with that. What’s really at stake in this debate is the autonomous use of kinetic force against humans. And again it is important to be clear in the definition here. The air defense systems on naval ships, called Rolling Airframe Missile (RAM), also shoot automatically and adjust to their targets autonomously. But those targets are not humans—they are other missiles approaching at high speed, and RAMs are far superior to humans in their precise ability to respond. The majority doesn’t consider that problematic, either.<br />
The key question is whether kinetic violence against humans can be decided autonomously. The German government has clearly said no—there always needs to be a person involved in such a decision. What other countries do is unfortunately not under our control. But Germany has ruled it out and is calling, rightly so, for more international regulation, as difficult as that may be. The rapid pace of technological development is constantly generating new questions and gray areas.</p>
<p><strong>What developments do you expect to see on the digital battlefield or with AI used as a weapon?</strong> There are more and more sensors on the battlefield, but also satellite images, internet data, mobile data, and so on. By digitizing, processing, and presenting all that data, one can gain a competitive advantage. Those who have better information, who manage to put all that information together, win. They have a perspective on who the attacker is, how the attacker is equipped, and so on.<br />
But conversely, the more interconnected or digital a system is—whether it’s the Eurofighter or the Puma armored car—the more vulnerable it is. Digitalization means everything is connected digitally, and the downside is the existence of cyber threats: everything can be hacked. That’s why cyber security—protecting against attacks on computers and programs—is so important. That brings us to the question of what role AI plays in cyberspace. AI can be used as a tool to fend off cyber attacks, and it can detect attack patterns. Whoever manages to develop the best AI will have an advantage in defending and attacking. As with any technology, it’s all about supremacy. We find ourselves in the middle of a global competition, particularly between the US and China. Beijing published its AI strategy about a year ago. It is a very ambitious plan that aims to make China the world leader in AI by 2030.</p>
<p><strong>Google’s AlphaGo program beat Ke Jie, one of the world’s best players, at a game of Go in May 2017. That was considered a sort of wake-up call for the Chinese, wasn’t it, on par with the Sputnik shock of 1957?</strong> Yes, I think it was. There is a glut of data in China; people there appear to be more willing to relinquish their data. China has a different relationship to privacy and data protection. And highly-developed sensors and processors are everywhere, in cell phones, cameras, computers, etc. There are nearly 1.4 billion people in China, and many are very tech savvy—early adopters who take every new innovation on board. The West needs to reconsider its attitude towards China. The theory has been that the Chinese can only copy, not innovate. But that image needs an urgent overhaul. The focus in AI right now is on implementation, and China can do that in a big way. When the Chinese want to achieve something—well, just look at the Belt and Road initiative.</p>
<p><strong>Who is actually driving development in AI—is it governments, or is it multinationals like Google or Apple?</strong> That lies at the core of many AI debates, in particular the question of what Germany and Europe’s path should be compared to the US, where primarily companies drive innovation, or China, where the state steers developments. It is important to shape how we want to deal with data, from regulating access to data for instance in the public sector to data science in schools. This needs to be done with transparency and with a balanced perspective on both the opportunities and risks.</p>
<p><strong>Besides the US and China, are there other leading AI countries? Russia’s President Vladimir Putin said recently that whoever leads on AI will rule the world…</strong> I’m afraid that’s true. I can’t adequately assess Russia’s skills. But it’s clear that we have a state actor that is very active in information and cyberspace.</p>
<p><strong>Can the development of AI be compared to the invention of the nuclear bomb?</strong> AI definitely has the potential to change the dynamics in cyberspace and the balance of power. This goes to the very core of security, especially because we have not yet been able to establish international regulations or controls. And there are other aspects that could further shape security policy and also need to be considered: AI is changing the economy as well. What happens when a country is economically superior or even has a monopoly because of AI? What are the implications for global value chains?</p>
<p><strong>Historically speaking, technological innovations often change all aspects of society. What is special about AI?</strong> That’s correct, every industrial revolution has also had an impact on security. But today, things are moving much faster. When the assembly line was created, for example, there was a clear impact on the defense sector as well – you could produce weapons much faster. Or when airplanes were invented, airspace took on a military dimension as well.<br />
But AI’s technological development has a far more immediate and broader impact globally. It’s as if you replaced your bow and arrow with a state-of-the-art fighter jet that doesn’t cost much and easily goes unnoticed. That is why AI worries me so much—especially because a terrorist group could hijack these technologies. The potential for abuse is enormous. Abusing AI costs nothing, and it isn’t immediately clear when someone develops or steals AI. You don’t see, hear, or smell anything, and you can’t see it on a satellite image.</p>
<p><strong>Are you talking about physical attacks, on infrastructure? Or psy-ops that influence public opinion?</strong> Everything. You have to look at the whole range. Policymakers in security have to be ready for all sorts of scenarios. I’m most concerned by the real, physical impact we’ll see when encryption or security systems are cracked. An opponent could derail trains or control medical devices or, as we saw with Ukraine’s energy grid, simply turn the lights off. The scenarios are endless and potentially devastating.</p>
<p><strong>Is the German government taking the problem seriously enough?</strong> Yes, it is. Look at what we saw happen in the Bundeswehr over the last parliamentary period. Cyber has been established as an independent military branch, with the build-up of a cyber command center; there were innovative experiments like the Cyber Innovation Hub and the cyber degree programs at the Bundeswehr University in Munich as well.</p>
<p><strong>Will that be enough?</strong> That’s hard to tell. But ultimately it’s just like developing a new European fighter jet. The Chinese and Americans are doing things on a completely different level. But does that mean we shouldn’t develop our own? No—we should.</p>
<p><strong>Do we in the West need to reconsider our privacy policies?</strong> I think we need to discuss how we deal with data and especially algorithms. The crucial question is: how do we make sure we know what the algorithms are doing? Who controls the algorithms? This requires a broad discussion, and it’s also a security issue. Take the example of early crisis detection—if an algorithm tells us: “There is a 35 percent chance that a crisis will erupt in a country in eight months’ time.” What do we do with that information?<br />
We ultimately need more social debates. At the moment there are often undifferentiated perspectives—sometimes ignorance or even flat refusal to deal with the issues at hand. But there is no way around digitalization. We have to talk about data and algorithms, about the future of work, and education. And how we want to live together, in a world full of AI.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/ai-can-change-the-balance-of-power/">&#8220;AI Can Change the Balance of Power&#8221;</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>A Question of Sovereignty</title>
		<link>https://berlinpolicyjournal.com/a-question-of-sovereignty/</link>
				<pubDate>Thu, 28 Jun 2018 13:28:19 +0000</pubDate>
		<dc:creator><![CDATA[Andreas Rinke]]></dc:creator>
				<category><![CDATA[Berlin Policy Journal]]></category>
		<category><![CDATA[July/August 2018]]></category>
		<category><![CDATA[Angela Merkel]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Germany]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6910</guid>
				<description><![CDATA[<p>Angela Merkel is deeply worried that Germany and the EU will fall far behind the US and China in developing AI. So far, ... </p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/a-question-of-sovereignty/">A Question of Sovereignty</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>Angela Merkel is deeply worried that Germany and the EU will fall far behind the US and China in developing AI. So far, however, her initiatives have failed to produce any significant results.</strong></p>
<div id="attachment_6860" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online.jpg"><img aria-describedby="caption-attachment-6860" class="wp-image-6860 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/04-2018_Rinke_Online-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6860" class="wp-caption-text">© REUTERS/Kevin Lamarque</p></div>
<p>Whenever Angela Merkel considers a political issue to be of pressing importance, she invites experts to the Federal Chancellery. The evening of May 29, 2018 was no exception: Merkel convened a group of 20 experts from business and science to discuss artificial intelligence. The chancellor has long been worried that Germany and the EU may miss out on this groundbreaking technology.</p>
<p>To also give her cabinet the benefit of the specialists’ urgent message, Merkel asked a number of ministers to attend, including her close confidant, economics minister Peter Altmaier, and labor minister Hubertus Heil of her coalition partner, the SPD. Germany is meant to have an AI strategy by the end of the year, so the cornerstones of a strategy have to be in place before the Bundestag’s summer break in July.</p>
<p>The chancellor has long been clear: Dominance in AI could entirely restructure the geopolitical order—or serve to cement American supremacy and Chinese power. There is resistance to this latter outcome, above all in Berlin and Paris. If there is one thing that French President Emmanuel Macron and Merkel are in absolute agreement about, it is the strategic importance of AI.</p>
<p>Since taking office, Macron has been talking of Europe’s need to defend its “strategic sovereignty,” adding that this will be a battle fought on many fronts, including technology. In a June 3 interview with the conservative <em>Frankfurter Allgemeine Sonntagszeitung</em>, Merkel emphasized that as early as 2017 she had been calling for Europe to take its destiny more into its own hands. Most have taken those statements as references to Europe’s security dependence on the United States. But she was also talking about technology.</p>
<p>It was in 2013, in the wake of the NSA spying scandal, that Merkel first had to acknowledge how being dependent on American software and Chinese hardware could limit a country’s capacity to act. Five years later, during a visit to China, Merkel took a detour to the high-tech city of Shenzhen, where she visited the Chinese startup iCarbonX and saw how far advanced China is in the use of big data for the health sector. It was a painful reminder for a chancellor who, for all her time in office, has yet to witness a successful introduction of an electronic health card in Germany.</p>
<p>As always after a trip to China, Merkel returned to Germany impressed and worried by the pace of change. While she is quick to stress that certain aspects of communist China—be it the desire to control citizens or the lack of data protection—could never be replicated in a democracy, that doesn’t change the fact that German companies have to compete on the same field as American and Chinese tech giants.</p>
<p><strong>Ever Louder Warnings</strong></p>
<p>Warnings about the explosive potential of AI development have been getting louder and louder. Military officials believe that AI is already in the process of revolutionizing warfare. The US and China have long been experimenting with autonomous weapon systems, drones, and ship fleets that can organize themselves. While Germany was debating whether its conventional weapons systems are even operational, or if there are enough boots and protective vests for its soldiers, other places were developing the technology that will decide future wars. The technological gap is widening even more rapidly—and in the outdated German security debate, nobody seems to have noticed.</p>
<p>Merkel has warned on numerous occasions, even to her coalition partners, of the need for increased spending on German defense. But she has so far avoided a debate about the possibilities and dangers of these new types of weapons. In the meantime, Germany’s Foreign Office has begun to consider how AI will change foreign policy. An international debate over the ethics of using automated and autonomous weapons has begun.</p>
<p>Merkel has, to this point, limited her attention to the shortcomings of digitalization in the civil sector. She has been pushing the issue since 2012. Her initial efforts attracted ridicule after she described the unpredictable consequences of the internet for society as <em>Neuland</em>—terra incognita. Then she threw her weight behind Industry 4.0, a campaign meant to push German companies to understand and implement the digitalization of production, administration, research, and sales. Merkel warned that, unless Germans master Big Data and the specialized manufacturing technology of digital-age products, proud German industry could simply become the work bench for American IT firms.</p>
<p>In 2016, the term “artificial intelligence” finally appeared in the chancellor’s vocabulary when Merkel, a trained physicist, realized during conversations with entrepreneurs and scientists that artificial intelligence would massively accelerate the fusion of otherwise separate research domains. “I’ll put it bluntly: It is not entirely clear to me in which fields we are actually top-notch; in which fields we need to buy more knowledge; what the interconnection of diverse fields will look like one day; and if we will, by then, have all of the technological capabilities that we need to be at the front of the pack,” she acknowledged at a research summit on June 12, 2016. Afterwards, Merkel called even more urgently for Germany to go on the offensive in AI research. If necessary, Germany should protect its AI companies from being taken over by American or Chinese firms.</p>
<p><strong>Three Competing Ministries</strong></p>
<p>One reason Merkel was turning up the heat was her improved understanding of the complex effects of AI, which she now describes as “disruptive” or “revolutionary.” Another was the realization, at the end of the 2017 legislative period, that the grand coalition’s digitalization efforts had had little success. For example, though national broadband expansion was ranked among the government’s top priorities in 2013, Germany’s position in the international rankings for data connectivity has continued to worsen.</p>
<p>The chancellor credits herself for the steady expansion of Germany’s research budget. But research expenditure was not tax deductible—an issue especially important to small and medium-sized businesses—until 2017. Venture capital has also been slow to arrive. In the last election campaign, politicians of all stripes complained that having three ministries, run by three different parties, responsible for digitalization didn’t exactly speed things up. On top of that, critics said that the government had made too many special considerations for Deutsche Telekom. By allowing Telekom to increase network speeds by “vectoring” preexisting copper cables, the government cut costs in the short run but only delayed the expensive and inevitable transition to fiber-optic cables.</p>
<p>In response, Merkel insisted during the campaign that the chancellery should centrally manage all digitalization activities in the future. In her new grand coalition, she has two senior figures dealing with the issue: Helge Braun, head of the federal chancellery, and Dorothee Bär, federal government commissioner for digitalization. Additionally, she created a new department responsible for digital issues in the central government office and integrated it with other departments from the interior ministry. In short, Merkel wants to speed things up. Her May 2018 meeting with AI experts only reinforced the point that Germany was lagging behind and needed to take decisive action. One element is to make it clear to small and medium-sized companies that if they want to survive, they have to engage with AI technologies.</p>
<p>In the battle to win back some technological sovereignty, lots of levers need to be pulled at the same time. That is not easy given Germany’s federal structures and the distance between politics and the economy. Unlike Beijing, Berlin cannot “rule from the top.” And unlike in the US, there are not huge sums of private money ready to be spent on scaling up start-ups or accelerating IT research. In Germany, the federal government has to ask the states to reform their educational systems in order to meet new technological challenges. And when the big companies in China or the US come calling, company managers naturally think about their own interests or the firm’s interests—not necessarily about Europe’s strategic needs.</p>
<p><strong>The Complete Value Chain</strong></p>
<p>In the tech age, a region only has technological sovereignty if it can produce the complete value chain of digital products. Whether it be computer chips, computers, batteries, or software, the European countries have collectively given up on competing at the top. Instead, they have become customers for other nations’ companies. The US and China dominate the market for software, hardware, and social media platforms, which have an ever-greater impact on daily life. And when there are interesting technological developments from German startups or companies, the large American and Chinese firms eagerly buy them out.</p>
<p>These problems are why the 2016 Chinese acquisition of the German robotics company Kuka generated such a passionate debate in the government about foreign investors. Should foreign takeovers of strategically important companies be more strongly controlled? “We need to exercise caution, so we can maintain a foundation, so that not everything will be bought out from us,” warned Merkel in May 2017. Merkel also called for more flexible European aid rules that would allow member-states to better support AI firms.</p>
<p>Yet the attempts to redress Europe’s deficiencies seem modest given the speed of innovation elsewhere. For years, Merkel has worked in the background with some similarly minded European leaders to try and develop an independent European computer chip factory, or perhaps a European battery factory. But the fragmentation of the EU internal market, national reservations, and the lack of strategic direction have hindered progress. Only in 2018 did the European Union implement its General Data Protection Regulation, which provides a common legal framework for handling data in the EU. Everything is too slow.</p>
<p><strong>A Three Percent Benchmark</strong></p>
<p>Merkel and Macron want to kindle a new research dynamic—or at least expand on current basic research and its applications. For the chancellor and the French president, the possibilities of AI are so revolutionary that the research needs to be revolutionary too. The model is DARPA, the Pentagon’s defense research agency, whose projects have regularly boosted civilian R&amp;D. But accepting that research projects will sometimes fail conflicts with the German approach of accounting for every penny spent.</p>
<p>The leaders of France and Germany argue that such a cautious, rigid approach makes it impossible to discover the necessary “game changers” or “technological leaps” that could secure the survival of European industry. They believe that the EU should use the European Innovation Council to support high-risk research projects—even if nine out of every ten projects will ultimately fail.</p>
<p>It is not clear whether the drive to catch up will work. China and the American tech giants have been following forward-looking strategies for years, investing tens of billions in new technologies. In an aging Europe, on the other hand, the debate focuses almost exclusively on the distribution of social benefits and fears of migration. The EU will still have around 450 million citizens after Brexit, but no functional European digital market to serve them.</p>
<p>On top of that, there is the general lack of awareness about the importance of innovation. Much of the EU is absorbed by passionate debates about NATO members’ commitment to spend two percent of GDP on defense. Yet nobody seems to have noticed that almost all EU member states are seriously neglecting another, more important commitment that dates back to 2010: they should all be spending three percent of their economic output on research and innovation.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/a-question-of-sovereignty/">A Question of Sovereignty</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Made in Europe</title>
		<link>https://berlinpolicyjournal.com/made-in-europe/</link>
				<pubDate>Thu, 28 Jun 2018 13:22:29 +0000</pubDate>
		<dc:creator><![CDATA[Cécile Boutelet]]></dc:creator>
				<category><![CDATA[Berlin Policy Journal]]></category>
		<category><![CDATA[July/August 2018]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[France]]></category>
		<category><![CDATA[Germany]]></category>
		<category><![CDATA[The EU]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6907</guid>
				<description><![CDATA[<p>Europe has the capacity to become a global AI leader, and its data protection may even prove to be an advantage. But more support ... </p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/made-in-europe/">Made in Europe</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>Europe has the capacity to become a global AI leader, and its data protection may even prove to be an advantage. But more support for startups and medium-sized businesses is needed.</strong></p>
<div id="attachment_6908" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1.jpg"><img aria-describedby="caption-attachment-6908" class="wp-image-6908 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Boutelet_online-1-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6908" class="wp-caption-text">© REUTERS/Benoit Tessier</p></div>
<p>What is the best way toward a collective European strategy for artificial intelligence (AI)? This past March, Cédric Villani, a member of the French parliament, established an important landmark to guide development: a parliamentary report titled “For a Meaningful Artificial Intelligence” that has sparked passionate discussions across France.</p>
<p>Villani is a prominent public figure—a renowned mathematician who received the Fields Medal, one of the highest distinctions awarded to mathematicians, in 2010. In France’s political landscape, scholars like Villani are a rarity. His distinctive style of clothing and enthusiasm for technology have turned him into a public celebrity. “Artificial intelligence must be a topic of discussion for the general public, otherwise we lose our foundation,” Villani is often quoted as saying.</p>
<p>His AI strategy for France and Europe has three key features. First, it must be a joint European endeavor because only a collaborative Europe has the necessary diversity and scale for an ideal AI development. Second, the strategy must be cross-cutting, which means that experts from multiple disciplines, private companies, and startups should all contribute to AI development and research. Moreover, simplified administrative procedures should enable and guide this collaboration. Third, Villani explains that the AI strategy must be inclusive. Inclusivity means not only expanding discussion to include as many people as possible, but also protecting citizens and investing in education. Villani recommends that this European strategy should focus on four key sectors: transport, healthcare, defense, and environment.</p>
<p>The Franco-German collaboration on artificial intelligence must be prioritized, emphasizes Villani, who in his research over the past months has traveled several times to Germany. Both countries have no shortage of top mathematicians and computer scientists, he said during a talk hosted at the French Embassy in Berlin in early March. “Everybody admires the strength of the German industry and also the facility of German research and industry to collaborate. These things are very key in the development of AI.”</p>
<p><strong>Competing for Talent</strong></p>
<p>What Villani did not say is that German industry is still very much dominated by traditional sectors, like cars, chemical sciences, and industrial machinery. And it is precisely the success of these industries that has contributed to lagging development in digitalization, particularly for small- and medium-sized companies.</p>
<p>Industry in France is not as dominant as in Germany, yet France has succeeded in initiating a dynamic modernization process. French President Emmanuel Macron has managed to push through numerous reforms, and there is additional pressure on companies to invest in France. France’s overall strategy of providing public support for startups is beginning to produce results: the country is slowly gaining a reputation for innovation. At the 2018 Consumer Electronics Show in Las Vegas, the most important convention for technological innovation, France represented the largest international contingent with 270 startups.</p>
<p>In the development of AI, the relative strengths of the French and German economies could excellently complement one another. However, both countries suffer from a similar brain drain of artificial intelligence experts. “The competition for talented and skilled workers is going to be one of the largest European challenges,” warned German Chancellor Angela Merkel at the end of May during a visit to Porto. One example of this challenge is the French AI expert Yann LeCun, who left his country to become Chief AI Scientist for Facebook AI Research (FAIR). Not all specialists leave their native country: Antoine Bordes leads Facebook’s AI Research Center in Paris. Google also wants to build an AI laboratory in the French capital. In order to combat this brain drain threat, Villani suggests providing higher salaries and better working conditions for AI researchers at public institutions in Europe.</p>
<p><strong>A Franco-German Research Network</strong></p>
<p>It took Chancellor Merkel a very long time to put together a government coalition, and this summer is seeing new challenges to her authority. But for now, she can finally work more closely with Paris. Talks are already underway on an effort to establish a joint Franco-German center for AI, as proposed in the coalition government agreement. Such a center could make use of German institutions already established near the French border, which could be expanded to provide a joint research network.</p>
<p>There is a German Research Center for Artificial Intelligence in Saarbrücken, which was founded in 1988 as a public-private partnership. The Saarbrücken AI Center is the best-funded AI research center in the world, receiving resources from both the German federal budget and from major corporations like BMW, Volkswagen, Airbus, Bosch, SAP, Deutsche Telekom, Google, Intel, and Microsoft. Additionally, in 2016, a research network called “Cyber Valley” was founded in Baden-Württemberg, not far from the French border. It is supported principally by the Max Planck Institute for Intelligent Systems, which draws resources from research centers in Tübingen and Stuttgart. Cyber Valley receives national funds, but it also works with groups like Bosch, Daimler, Porsche, IAV (Engineering Society for Automobiles and Traffic), ZF Friedrichshafen, and the internet giant Amazon. Its research focus is primarily on self-driving vehicles—an important challenge for Germany, considering its large dependence on the automobile industry.</p>
<p>Aside from the location of the planned Franco-German Research Center, there is a much more pressing question that needs to be discussed: Is this center really the best answer to the challenges and opportunities presented by AI? France and Germany already have outstanding research facilities, but they haven’t benefited much in terms of marketable innovations. This is precisely why some experts recommend that, to create an effective European AI ecosystem, Europe needs to pursue a broader approach by combining funding for AI research with increased support for startups and middle-sized companies. This is exactly what Villani champions when he speaks of the necessity for “agile and widely usable” research—a research strategy characterized by more proximity to the market.</p>
<p>A report from the consulting firm Roland Berger titled “Artificial Intelligence—A strategy for European startups” follows a very similar line of thought. “Rather than coming from multinational firms as in the past, innovation now stems largely from research laboratories, digital platforms, and startups. These are the players creating algorithms and developing use cases, they are the brains behind innovations in image recognition, natural language processing and automated driving,” said the report’s authors.</p>
<p>Almost 40 percent of all startups active in AI are located in the US. In order to compete, the consulting firm calls for creating a European legal status for startups as well as increased incentives for financing European AI startups, which have less access to capital than their American and Chinese competitors. The Berlin-based Stiftung Neue Verantwortung, a think tank focused on the interaction between technology and society, also recommends developing structures to help medium-sized companies integrate AI solutions into their business models.</p>
<p><strong>Data Protection as a Plus</strong></p>
<p>The acceptance and integration of AI technology among the general public and middle-sized companies will be the key challenge in Europe’s development of artificial intelligence. Whether it is nuclear energy, genetically modified food, or the Volkswagen emissions scandal and the uproar over diesel engines, no technology can prevail if it is rejected by a significant portion of the population. AI is not only fascinating for many people—it is frightening as well. Many are worried by the ethical problems of the emergent technology and scared by scenarios of enormous job losses caused by AI.</p>
<p>Meanwhile, medium-sized companies are concerned that technological innovations from AI, particularly with regards to computing power, might leave them behind or replace them entirely. These concerns also require a European solution that could be based, in part, on the EU General Data Protection Regulation (GDPR), which went into effect on May 25. The GDPR enables Europe to distinguish itself from its competitors as a region in which technology respects the boundaries of personal privacy.</p>
<p>The ethical dimension of EU data protection contributes to the attractiveness and competitiveness of Europe—just look at the US where numerous AI researchers have expressed their discomfort with the handling and processing of personal data. It is a distinctly European conviction that data privacy can in most cases be protected without sacrificing results by using anonymized data. This regulation is reassuring for citizens, but also for companies seeking to use data without compromising on quality or their protection of clients and partners. As a result of this added security and protection, businesses increase their ability to become integral parts of the AI ecosystem.</p>
<p>Without data, even the best algorithms are useless—and only the companies that know how to collect, analyze, and protect data in the long-run can reap the benefits of AI. Europe as a whole needs to strongly defend its values of human security and privacy. Only then can they serve as the necessary foundation for a competitive artificial intelligence that is “Made in Europe.”</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/made-in-europe/">Made in Europe</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>AI for Xi</title>
		<link>https://berlinpolicyjournal.com/ai-for-xi/</link>
				<pubDate>Thu, 28 Jun 2018 12:27:19 +0000</pubDate>
		<dc:creator><![CDATA[Finn Mayer-Kuckuk]]></dc:creator>
				<category><![CDATA[Berlin Policy Journal]]></category>
		<category><![CDATA[July/August 2018]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[Xi Jinping]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6903</guid>
				<description><![CDATA[<p>China aims to become AI leader and a “technical-economic great power”. It‘s devoting huge resources to that goal. Preparing for the warfare of the ... </p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/ai-for-xi/">AI for Xi</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>China aims to become AI leader and a “technical-economic great power”. It‘s devoting huge resources to that goal. Preparing for the warfare of the future is part of the strategy.</strong></p>
<div id="attachment_6848" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online.jpg"><img aria-describedby="caption-attachment-6848" class="wp-image-6848 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/06/Mayer-Kuckuk_bear_online-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6848" class="wp-caption-text">© REUTERS/Ted S. Warren/Pool</p></div>
<p>In Beijing’s Zhongguancun district, a nondescript skyscraper rises between a subway station and an electronics store; the name on the facade: Sea Dragon Buildings. The windows on the ground floor are boarded up and the businesses have long since been shuttered. Yet a few floors above, some of China’s most sophisticated start-ups are developing cutting-edge technology. One of them is Horizon Robotics, a company that is just three years old, but already a globally known name in artificial intelligence.</p>
<p>“Our mission is to create one of the leading platforms for AI worldwide,” said the company’s founder Yu Kai. He was head of the Institute for Neural Networks for Chinese IT conglomerate Baidu before breaking off to start Horizon Robotics. The company builds chips and software modeled on neural networks to drive AI. Its technology can recognize and interpret patterns and situations—invaluable for companies working in the auto, aviation, and robotics sectors.</p>
<p>Horizon is a paragon of China’s booming AI scene. The country is well on its way to dominating what has become a crucial 21st century technology. The State Council announced at the end of last year that China is aiming to develop world-class AI technology by 2020: “The use of AI should underpin our aspiration to be on equal footing with other innovation leaders,” wrote China’s top state planners in a July 2017 directive. “This is the new focus of international competition and the strategic technology of the future.”</p>
<p>In other words, China is not only endeavoring to be a central production site for the next generation of technology. It wants to level the playing field with its global rival, the United States. After all, the country that dominates AI will, according to experts, also gain a key military and geopolitical edge. And China has left no doubt about its ambition: to be a “technical-economic great power,” according to the directive.</p>
<p>It is no wonder, then, that AI has top priority. In an address last year, President Xi Jinping called the technology one of the pillars of his economic policy. “It is important to promote the deep integration of the internet, big data, artificial intelligence, and the real economy,” he said in his 2017 party speech. And if China sets itself an objective, things start happening. Billions have been invested in innovation and start-ups, and the country’s provinces are competing to have AI companies settle in their regions. In its 2017 Global Artificial Intelligence Study, consulting firm PricewaterhouseCoopers dubbed the growing competition between China and the US a “global AI arms race,” where the great powers will compete—and where the “war over research, investment, and capable minds” will eclipse any trade dispute.</p>
<p><strong>AI Starts Early</strong></p>
<p>The Chinese planning apparatus is indeed pursuing a long-term approach and investing heavily in training. AI has been a nationwide subject in computer science education for a few months now; the first lessons begin in primary school. Initially, 40 select schools are offering courses for secondary school classes—at a level that elsewhere in the world would be considered appropriate for universities. The Ministry of Education started issuing its own textbook on AI in April, according to the Hong Kong-based daily South China Morning Post. Colleges and universities, meanwhile, encourage students to start their own companies.</p>
<p>China has already acquired considerable knowledge and success in AI. The Beijing City Council counts some 400 AI companies in the vicinity of the Horizon Robotics office alone. And according to a study by the Japanese engineering company Astamuse, China is second in the world in patent applications, only behind the US. But it is catching up fast. AI is also already in regular use in China for a wide range of applications. Facial recognition, for example, is widely employed. In some public bathrooms, a machine will only dispense toilet paper to people who first look into a camera—and they only receive four pieces of paper. If the same person tries to return for more, the machine refuses. The idea is to stop people from stealing toilet paper (a rampant problem authorities are trying to stamp out). Then there are university entrance exams: schools are required to register students with a biometric photo to prevent them from sending a substitute to sit for a test.</p>
<p>The biggest client, however, is the police. One of their contractors is Megvii, or Mega Vision, also located in Beijing’s Zhongguancun engineering district. The company’s facial recognition software is based on neural networks and can pick out and positively identify people from blurred images and in huge crowds. Beijing police are now able to catch suspects who make the mistake of walking down the street in a camera’s line of sight. Some 400 million cameras will be installed in public spaces across the country; soon, culprits will have no chance of moving around unnoticed.</p>
<p>In this way, an authoritarian state is gaining a sizeable technical advantage over the West. It is a paradox to many observers who believe democracy goes hand in hand with technological supremacy. “The reason for China’s success in AI and data mining, however, is precisely the lack of data protection,” says Dong Tao, a China economist at Credit Suisse.</p>
<p>The Chinese communication app Wechat, for example, processes seven billion photos per day that the government and AI researchers can access. As the Davos World Economic Forum pointed out, the explosion of patent applications in China is thanks in part to having the world’s largest digital user base.</p>
<p><strong>Alibaba vs. Amazon</strong></p>
<p>Alibaba is another example of how China appears to be gaining an edge. Like Amazon, Alibaba is a platform for selling items online. The company uses adaptive learning methods to improve its suggestions for the customer’s next purchase, pointing them to its shopping sites like Taobao. With its algorithms, Alibaba can align and adjust its own forecasts to match actual customer behavior; the quality of Alibaba’s suggestions, therefore, is better than Amazon’s—according to the company’s own statements, at least.</p>
<p>Horizon Robotics, meanwhile, is gearing up for the use of its AI chips in self-driving cars. For Beijing, it’s crucial to have key technologies in Chinese hands. The state does not directly fund Horizon Robotics, but it does so indirectly: if you want to sell high-tech products in the Chinese market, you have to demonstrate a minimum added value from a Chinese company. That’s why Audi is interested in using Horizon’s technology for developing self-driving cars in China. In other markets, however, the carmaker has turned to competitors, like US company Nvidia.</p>
<p>More and more, the Chinese appear to be surpassing the Americans in technology. Horizon’s cameras do not merely capture a pixel pattern, like traditional devices: They understand what they see and assign parts of the picture to a corresponding meaning. A cyclist is recognized and assigned a code; so is a building, a pedestrian crossing, or a mother with a stroller. And the chip even provides predictions on what might happen in the next few seconds. A yellow traffic light, for example, will turn red (it was just green); a cyclist will be one meter to the left (riding from the right); the mother with the stroller will likely stop (the pedestrian light is flashing red). The chips then feed that data to the central computer board, which uses the information to decide on the car’s next move.</p>
<p><strong>AI on the Battlefield</strong></p>
<p>Horizon doesn’t talk much about future uses of its technology, but it’s clear the possibilities are endless. Take aviation, where improved autopilot systems and autonomous aircraft are already in use. That brings us to the military. In sealed-off research facilities, the People’s Liberation Army is furiously at work on mapping the future of warfare and its consequences.</p>
<p>In an essay that has since been taken off the internet, Officer Chen Hanghui from the Army College in Nanjing debated how artificial intelligence could “change the rules of warfare.” He came to the conclusion that technological singularity on the battlefield is imminent. Technological singularity is the theory that radical and rapid developments in AI will mean that machines overtake humans. Thinking systems can learn, adapt, and reprogram themselves, creating super-intelligence.</p>
<p>China’s military is already looking ahead to the time when traditional armies will not be able to compete with AI-driven, automated armies. The country’s air force considers the introduction of highly intelligent systems to its fleet an utmost priority.</p>
<p>“In the future, mobility of information will be a decisive factor in aerial combat, electromagnetic attacks, or cyber operations,” Yang Wei, Vice President of the Commission for Science and Technology at the state-owned Aviation Industry Corporation of China (AVIC), told the official state newspaper in July 2017. That opens an opportunity for China to “overtake the West,” he added.</p>
<p>Most Chinese defense experts have remained sober in their assessment of AI on the battlefield, describing instead the practical applications the technology can provide. Their attitude is mostly defensive: it’s about ensuring that China has the capability to defend itself should the need arise.</p>
<p>And for good reason. Militaries around the world are weighing up the same issues. The combat machine of the future is unassailable: It no longer has a human form; it distinguishes between friend and foe in a matter of milliseconds, and it reacts without hesitation or doubt. Experts have long been sounding the alarm over the dangers of such fighting machines, but as China’s military websites repeatedly point out, anyone who wants to survive in international competition needs that technology. If AI determines who rules the world, as Vladimir Putin recently noted, China is ready for the challenge.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/ai-for-xi/">AI for Xi</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>“We&#8217;re Building Technologies That Are, by Their Very Nature, Dual Use”</title>
		<link>https://berlinpolicyjournal.com/were-building-technologies-that-are-by-their-very-nature-dual-use/</link>
				<pubDate>Tue, 03 Apr 2018 12:04:01 +0000</pubDate>
		<dc:creator><![CDATA[Toby Walsh]]></dc:creator>
				<category><![CDATA[Berlin Observer]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Warfare]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6426</guid>
				<description><![CDATA[<p>Toby Walsh discusses some of the risks artificial intelligence entails—including the possibility of an AI “arms race” —and what steps can be taken to mitigate them.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/were-building-technologies-that-are-by-their-very-nature-dual-use/">“We&#8217;re Building Technologies That Are, by Their Very Nature, Dual Use”</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>Toby Walsh, a professor of artificial intelligence at the University of New South Wales, discusses some of the risks artificial intelligence entails—including the possibility of an AI “arms race” —and what steps can be taken to mitigate them.</strong></p>
<div id="attachment_6424" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT.jpg"><img aria-describedby="caption-attachment-6424" class="wp-image-6424 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/04/BPJO_Walsh_AI_Warfare_CUT-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6424" class="wp-caption-text">© REUTERS/Omar Sobhani</p></div>
<p><strong>You’ve written quite a bit about the potential for an AI arms race. Are you more worried about AI falling into the hands of bad actors, or possible unintended consequences regardless of who “owns” the technology?</strong> Those are both dangers we should be worried about. There might be unexpected consequences, we might have flash crashes, we may have flash wars. Our systems, although well designed, could interact in strange ways with opposition systems, and that could result in a nasty feedback loop we didn’t intend, so we end up fighting a war when we didn’t want to.<br />
So that’s one risk. Another is, as you say, the chance that they will fall into the hands of the wrong people, and will then be used against us. Many militaries are waking up to that possibility. The chief of the Australian army just went on record saying exactly that, that we should be very worried that these weapons could be used against us by our enemies. And even if we design them well and put ethical safeguards in, other parties will be quite happy to take those ethical safeguards off.</p>
<p><strong>Many of the scientists associated with the Manhattan Project, which built the first atomic bomb, later regretted their work—they said that had they known what their research would create, they wouldn’t have participated. With AI, will there be a clear point when we realize we&#8217;re building something dangerous and have time to reconsider? Or will we make that leap without much warning? </strong>I think it’s great to look back at history, at instances like the Manhattan Project, and see if we can actually learn how we can go about managing technological change. The Manhattan Project is similar but different—I think many of the scientists with the Manhattan Project initially were motivated by a very worthy cause, which was the war going on in Europe and the horrors associated with that war. Of course, by the time it finished, the war in Europe was over, so I think their motivation changed. And they did call for the Japanese to be shown a demonstration of the bomb on an uninhabited island and told “We have more of these,” but the generals wanted to see the destructive impact they could have on a real city.<br />
Interestingly—and again, to think of the historical precedent—there was a petition put out among the scientists. Of course, this was done in secret, as the project was a secret, and it was only discovered after the event. Obviously, the military didn&#8217;t listen to the scientists and decided to go ahead and drop the bomb. It&#8217;s possible that saved lives, but the counter-history is always difficult to discuss.<br />
But there is a fundamental difference. When they were building the first nuclear bomb, they were trying to build an explosive device—the mother of all explosive devices. Here we&#8217;re building technologies that are, by their very nature, incredibly dual use. Many of us are working on building them for good ends, improving people’s lives and productivity, making us healthier, wealthier, and happier. But the same technology can be used for military ends. It could be used for good military ends that save people&#8217;s lives, and for ends that I would consider less desirable, in terms of changing the nature of warfare and making it easier and faster to kill people.<br />
So we want the technology—it will have immense benefits—but the same technology that lets an autonomous drone identify, track, and target will be used to identify, track, and avoid pedestrians. You&#8217;d change one line of code.<br />
That same technology is going to save lives—millions of people die in road traffic accidents every year, 30,000 in the United States alone. So it&#8217;s going to be of immense benefit to society and safety and mobility. There are immense benefits that come with having autonomous vehicles. So we will go down that path; it&#8217;s too desirable not to develop. But the same technology can be repurposed; we can&#8217;t avoid that fact.<br />
But we&#8217;ve seen this in many other settings. Chemical weapons are a good example: We didn&#8217;t ban chemistry, we banned chemical weapons. But the same chemistry that is used to make fertilizers is used to make explosives.</p>
<p><strong>You signed on to the Asilomar Principles, a set of principles meant to represent a first draft of the kind of ethical regulations that might be used to govern AI development.</strong> Yes, I was at Asilomar for the conference that happened in January 2017 to discuss the responsible, ethical development of AI. That was a very interesting meeting. A lot of my colleagues were there, and the location of the conference was chosen for symbolic reasons—that&#8217;s where they had a previous meeting about gene editing and genetic manipulation, which resulted in a voluntary embargo on these dangerous technologies being developed by the scientists. It was an interesting meeting, and there was a lot of consensus about the responsible development of AI.<br />
Since then there have been a number of other initiatives, like the IEEE initiative on ethics and the Partnership on AI that was featured at the conference. So a number of bodies and institutions have come together to provide fora to discuss principles and practice concerning the ethical and responsible development of AI, which is something I think we should be worrying about.<br />
In the past, we hadn&#8217;t worried about this. But in the past we didn&#8217;t have much of an impact on the world, so it didn&#8217;t really matter. But now, AI is becoming part of our lives—it will become like electricity, it will be woven into the fabric of everything, making decisions that impact people&#8217;s lives—whether they are given credit, whether they are let out of jail—many decisions that are really troubling. We need to think carefully about where and when we add AI to our lives. There&#8217;s a very nice article, a transcript of a speech given by the humanist Neil Postman, “Five Things We Need to Know About Technological Change.” He says that technology is a strange intruder into our lives, and we should only let it into those parts of our lives where it&#8217;s actually going to do us good. People think of it in a sort of mythical sense, as though it&#8217;s in some sense inevitable. It’s going to happen and we&#8217;re just going to have to adapt to it. But we actually get to make choices as to where the technology is let into our lives and how.</p>
<p><strong>One of the principles agreed upon was that researchers and policymakers would work hand-in-hand to guide AI&#8217;s development. Do you think that’s happening enough?</strong> I should say that the principles are a work in progress. I have some reservations about some of them. Some of them are things where I think they&#8217;re not specific to AI, they&#8217;re things you&#8217;d say about any technological change that you were trying to develop. And perhaps we shouldn&#8217;t have specific principles for AI—perhaps they should fit into the general ethical frameworks we have for technological change, full stop.<br />
But AI does introduce some interesting new things, like autonomy. We don&#8217;t have any autonomous systems in our world yet, but we will be building them, and we have to ask these questions. It&#8217;s been asked here today, for example, should robots be given rights? Because they&#8217;re autonomous, they&#8217;re part of our universe, and we may have to worry about responsibilities and rights and things like that. So they do introduce things that require us to make fresh ethical principles.<br />
But the short answer is no, that&#8217;s for sure. I frequently get asked to talk to people involved in politics, and most of the time they&#8217;re trying to understand the technology themselves. There&#8217;s a language and science gap there to be filled.<br />
The good thing is that people are waking up to this. They will find that there are many people in the academic and AI communities quite willing to spend time coming to fora like this one and talking to politicians in places that are unusual for scientists, like the United Nations. As scientists we have a real responsibility—these are technologies that will impact everyone&#8217;s lives—to ensure there&#8217;s an informed conversation, and that decisions aren&#8217;t made as they generally are now, by the developers and the technologists.</p>
<p><strong>The last principle said recursively self-improving AI should be strictly controlled with strong safety guarantees. This was obviously a topic that worried the now-late Stephen Hawking. Do you think we&#8217;re too alarmed about this? Not alarmed enough? Appropriately alarmed?</strong> I think we&#8217;re too alarmed about this. We don&#8217;t know how to build recursively self-improving AI. We’ve never built any recursively self-improving system, despite all the amazing things we&#8217;ve managed to build in the universe. In fact, I recently wrote an article titled “Why the Singularity May Never Be Near,” which lists a dozen or so arguments—technical reasons, mostly—as to why we may never end up building recursively self-improving AI.<br />
That doesn&#8217;t mean we won&#8217;t actually build machines that are as intelligent as us, ultimately even much more intelligent. I just suspect we&#8217;ll do that the old-fashioned way, which is through our own intelligence and perseverance and sweat. We won&#8217;t just design machines that magically improve themselves. A good analogy is ourselves: Despite the fact that we have a good understanding of learning and the brain, we haven&#8217;t changed the way we learn. It&#8217;s still just as painful. We haven&#8217;t improved our learning; we&#8217;re not recursively self-improving.</p>
<p><em>— interview conducted by Josh Raisher</em></p>
<p>The BERLIN POLICY JOURNAL was media partner of Aspen Germany’s “<a href="https://berlinpolicyjournal.com/the-ai-revolution/">Humanity Disrupted</a>” conference, held in Berlin.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/were-building-technologies-that-are-by-their-very-nature-dual-use/">“We&#8217;re Building Technologies That Are, by Their Very Nature, Dual Use”</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>&#8220;Some Jobs Might Be Done Entirely by AI&#8221;</title>
		<link>https://berlinpolicyjournal.com/some-jobs-might-be-done-entirely-by-ai/</link>
				<pubDate>Tue, 20 Mar 2018 15:23:53 +0000</pubDate>
		<dc:creator><![CDATA[Dileep George]]></dc:creator>
				<category><![CDATA[Berlin Observer]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6413</guid>
				<description><![CDATA[<p>An interview on the future of work and how AI will reshape our societies.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/some-jobs-might-be-done-entirely-by-ai/">&#8220;Some Jobs Might Be Done Entirely by AI&#8221;</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>Dileep George got his PhD in electrical engineering from Stanford University before founding Vicarious AI, a California-based company working to develop artificial “general” intelligence for robots. The BERLIN POLICY JOURNAL talked to him on the sidelines of the Aspen Institute’s “Humanity Disrupted” conference about the future of work and how AI will reshape our societies. </strong></p>
<div id="attachment_6414" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT.jpg"><img aria-describedby="caption-attachment-6414" class="wp-image-6414 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/BPJO_DileeP_AI_Interview_CUT-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6414" class="wp-caption-text">© REUTERS/Steve Marcus</p></div>
<p><strong>One of the big applications you&#8217;re focusing on at Vicarious is the assembly line. What are the responsibilities of AI developers in that sense, looking at the future of work?</strong> This is not something one company can solve; this is a political and social problem. If you look at the future, we can see that many of the jobs people do today are going to be automated. And this is not just happening at assembly lines—driving is one example. Autonomous cars will displace drivers. It’s happening in document discovery and document summarization in legal work. These are all things that can be automated, and it’s already starting. Initially it will be in coordination with humans, but eventually some of those jobs might be done entirely by AI. At a societal level, we have to make sure that the workers who are displaced are retrained for new jobs, or even given something like a basic income so that they can find new avenues of work.</p>
<p><strong>Is it too late? One leading robotics company here said that the coders of today will be the blue-collar workers in thirty years, because we&#8217;re moving too slowly in terms of changing our education systems.</strong> No, I don’t think it is too late. This will happen over the next decade. But we do have to start discussing the ideas and making policy initiatives, and probably run some controlled experiments to see which of them work.</p>
<p><strong>Some of these advances touch on many sectors at once—defense, economics, health. So where do you see regulation coming from? National departments of defense? Finance ministries?</strong> This is not about regulating AI per se. I’m not sure how to regulate AI—I don’t think you can regulate the development of AI, in fact, because it is hard to imagine a framework for regulation that wouldn&#8217;t kill off innovation.<br />
What we should be regulating is the wealth we are creating through AI. How do you make sure that wealth is available for the welfare of everybody? That would be one question to ponder.<br />
Another one is this: If more work becomes automated, what should society look like? Maybe we will readjust so that we have more leisure time. Should we have three-day weekends instead of two-day weekends? The technology available at a particular time also determines how society is organized. When AI becomes an enabler of full automation, we&#8217;ll have to see what kind of social order will make that work.</p>
<p><strong>How far off is that? </strong>That&#8217;s definitely decades away, so we probably don&#8217;t have to worry about it now. When we develop new things, those tools also help us in finding solutions to the problems that arise. When we created computers, we also created viruses—but we can combat viruses with computers. Decades is pretty long compared to the technology cycle. We got personal computers only in the 1970s.</p>
<p><strong>When you say full automation is decades away, do you mean full automation in terms of doing tasks independently, or full automation in terms of self-improving AI? </strong>Full automation in terms of doing tasks.</p>
<p><strong>You mentioned how important it is to invest time and money in innovation. You&#8217;re based in the US system—do you see a lag in Europe when it comes to keeping up, in terms of public and VC investment? </strong>It’s the ecosystem. It is the devil-may-care attitude of entrepreneurs who jump in and take risks—they have that tolerance for experimenting and failing. Getting to success involves many challenges, failures, and reinventions along the way. I think that mindset and ecosystem—and being able to make money from that process—is important. Strong encouragement of that kind of ecosystem is necessary.</p>
<p><strong>What are you most excited about in terms of AI in the future?</strong> I’m excited about making robots do what a three-year-old child can currently do easily. Play in a sandbox! That would be awesome.</p>
<p><em>— interview conducted by Josh Raisher</em></p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/some-jobs-might-be-done-entirely-by-ai/">&#8220;Some Jobs Might Be Done Entirely by AI&#8221;</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>The AI Revolution</title>
		<link>https://berlinpolicyjournal.com/the-ai-revolution/</link>
				<pubDate>Thu, 15 Mar 2018 17:38:27 +0000</pubDate>
		<dc:creator><![CDATA[Josh Raisher]]></dc:creator>
				<category><![CDATA[Berlin Observer]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Aspen Institute]]></category>

		<guid isPermaLink="false">https://berlinpolicyjournal.com/?p=6398</guid>
				<description><![CDATA[<p>There's a lot of talk about AI's potential—and a lot of worry about responsibility.</p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/the-ai-revolution/">The AI Revolution</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><strong>At the Aspen Institute&#8217;s &#8220;Humanity Disrupted: Artificial Intelligence and Changing Societies&#8221; conference, there&#8217;s a lot of talk about AI&#8217;s potential—and a lot of worry about responsibility.</strong></p>
<div id="attachment_6407" style="width: 1000px" class="wp-caption alignnone"><a href="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT.jpg"><img aria-describedby="caption-attachment-6407" class="wp-image-6407 size-full" src="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT.jpg" alt="" width="1000" height="563" srcset="https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT.jpg 1000w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT-300x169.jpg 300w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT-850x479.jpg 850w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT-257x144.jpg 257w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT-300x169@2x.jpg 600w, https://berlinpolicyjournal.com/IP/wp-content/uploads/2018/03/bIMG_2666_CUT-257x144@2x.jpg 514w" sizes="(max-width: 1000px) 100vw, 1000px" /></a><p id="caption-attachment-6407" class="wp-caption-text">© Landesvertretung Baden-Württemberg</p></div>
<p>Artificial Intelligence (AI) offers exciting potential but also requires policy oversight—the kind of policy oversight that, unfortunately, no government can realistically provide on its own. When Volker Ratzmann, state secretary for the federal state of Baden-Württemberg, opened the Aspen Institute&#8217;s “Humanity Disrupted: Artificial Intelligence and Changing Societies” conference at Baden-Württemberg&#8217;s representative offices in Berlin, he started with what would become an oft-repeated theme.</p>
<p>“AI presents huge challenges that we have to manage and regulate together,” Ratzmann said. “No country can manage this individually.” Kent Logsdon, the American Embassy’s chargé d&#8217;affaires, echoed this sentiment, saying that while governments should not centrally direct AI, there are also “serious issues to be discussed, and many will require leadership to seize benefits while managing risks.”</p>
<p>Over two days, a mix of researchers, industry representatives, and policymakers returned again and again to what humankind could achieve with AI and machine learning over the next several decades —the potential advances in medicine, transportation, and manufacturing, to name only a few—but also to the dangers it could unleash if left unregulated.</p>
<p>Joanna Bryson, a professor at the University of Bath and the Princeton Center for Information Technology Policy, emphasized the complete decoupling of productivity and wages that technology has contributed to, pointing out that as technology dissociates work from the location where it is performed, it will have to be governed more by international treaty than by national regulation. Nicola Beer, a German lawmaker and secretary general of the Free Democratic Party (FDP), added that AI would create as many jobs as it destroyed—or even more—but that it would also call into question the most basic functions of government, changing how we look at concepts like taxation and finance.</p>
<p>Autonomous vehicles also took center stage in several discussions. Anne Carblanc, the head of the OECD’s Digital Economy Policy Division, noted that self-driving cars could cut transportation costs by 40 percent, but will endanger 2.2 million to 3.1 million jobs in the US alone over the next two decades. From the industry perspective, meanwhile, Jeff Bullwinkel, an Associate General Counsel with Microsoft Europe, said the international computing giant shares these concerns but is optimistic that, with proper oversight, the benefits could be enjoyed even as the trade-offs are managed.</p>
<p>It was unclear, however, how exactly that oversight would work, or where it would come from. The researchers working on AI pointed out that artificial intelligence is itself only a tool, and that its ramifications will depend largely on how human beings apply it; in other words, AI doesn’t displace people, people do. Thus, it would be essential that policymakers take the lead on monitoring the development of our increasingly automated society and ensure that any threats that arise are addressed.</p>
<p>The policymakers, on the other hand, often pushed the same responsibility back to the tech sector, saying that it was the duty of companies to regulate their technological advances, and the responsibility of researchers to build accountability into their machines.</p>
<p><strong>The Innovation Gap</strong></p>
<p>In a panel titled “Driving Innovation: Autonomous Vehicles and the Future of Rail and the Open Road,” the consensus—as one might expect from a panel composed of AI researchers and transportation specialists—was that machine learning will dramatically enhance the efficiency of transportation networks. Yet Magnus Graf Lambsdorff, a partner at the venture capital company Lakestar, said he was less worried about the ramifications of autonomous vehicles and more worried that Germany, famous for its automotive industry, will not be the primary beneficiary: “The amount Germany invests [in AI research] is vanishingly small compared to France or the United States.”</p>
<p>That was a sentiment heard often at the conference—Germany in particular has the skills to be a major global player in AI but, as a late adopter of all things digital, has failed to invest funds, time, and energy in the field. In a panel titled “Is Germany Ready for the AI Revolution?” there was much hand-wringing over why Silicon Valley and China have surpassed Berlin in shaping the technology of the future (and present, for that matter). MP Thomas Jarzombek noted that Germany lags behind in data processing and knowledge transfer, adding: “The problem is Germans love hardware. We’re a great engineering country but we’re behind on software.”</p>
<p>That might be headed for change on the European level, at least. A European Union representative delivered the news that the Commission will set up a European AI alliance as a multi-stakeholder forum, hoping it will become a global platform that will, among other developments, generate a comprehensive set of ethical guidelines for deploying AI. How to implement those guidelines (and get all 27 EU members to sign on) is another question.</p>
<p>In one of the more ominous sessions, a panel of defense representatives and researchers discussed whether AI has brought about the third revolution in warfare; Toby Walsh, a professor of computer science at the University of New South Wales, warned of the dangers of allowing “stupid machines the right to decide over life and death,” adding that autonomous weapons systems will be weapons of terror: “You can ask them to do anything, however evil.”</p>
<p>Both the head of the German military’s future analysis branch, Olaf Theiler, and Frank Sauer, a researcher at the Bundeswehr University in Munich, appeared to agree that the use of AI in analyzing data or visualization was welcome—anything beyond, however, including operational decisions and tactical planning, needs humans to play an active role. It remains to be seen how governments like Germany can and will react when adversaries—whether they be non-state actors or other governments—choose to employ autonomous weapons systems and other AI applications in conflict and warfare.</p>
<p>There was broad consensus, across all the discussions and panels, that AI has already become a part of our lives—think search engines, as Joanna Bryson pointed out, or transcription software. Now it will take inclusive dialogue between society, politics, research, and industry to decide what we want to do with artificial intelligence, and quickly. Because as Frank Kirchner of the German Research Center for Artificial Intelligence in Bremen put it: “There is no law of nature that says that robots can’t become more intelligent than us—we have to make sure we don’t become less intelligent.”</p>
<p><em>NB. Berlin Policy Journal was a media partner for the Aspen AI2018 Berlin conference.</em></p>
<p>The post <a rel="nofollow" href="https://berlinpolicyjournal.com/the-ai-revolution/">The AI Revolution</a> appeared first on <a rel="nofollow" href="https://berlinpolicyjournal.com">Berlin Policy Journal - Blog</a>.</p>
]]></content:encoded>
										</item>
	</channel>
</rss>
