
The Devilʼs in the Detail


A flurry of AI ethics guidelines has been published this year by the EU, the OECD, and Beijing. But there are many stumbling blocks ahead before binding rules can be implemented.


Artificial intelligence (AI) is turning into an essential enabler for economic and military affairs. It has also become the tool of choice for surveillance activities in certain countries. Against this backdrop, governments, international organizations, and corporations have been drawing up guidelines on the ethical design and usage of AI algorithms and data.

Major technology companies had already drafted related principles in 2018, which is hardly surprising, as AI innovations nowadays mostly originate in the private sector. Google published AI at Google: Our Principles, while Microsoft wrote Microsoft AI Principles. Yet their data-driven business models and their commercial interest in AI fuel distrust. Critics accuse them of “ethical white-washing.” The reproach is that their published guidelines are nothing more than a marketing gimmick that aims to distract from their massive and abusive application of AI algorithms.

Irrespective of whether these accusations are true, there is an urgent need for stakeholders other than “profit-driven players” to become genuinely engaged in the AI ethics debate. In April 2019, the European Commission released its “Ethics Guidelines for Trustworthy Artificial Intelligence.” These guidelines were drafted by the 52-member High-Level Expert Group on Artificial Intelligence (HLEG AI), which consists of representatives from politics, industry, research institutions, and civil society. The document encompasses seven guiding principles, among them transparency (the traceability of AI systems should be ensured); privacy and data governance (citizens should have control over their own data); and diversity, non-discrimination, and fairness (which tackles the bias problems of AI systems).

In May, the Organization for Economic Co-operation and Development (OECD) released its own AI ethics guidelines, the “Recommendation of the Council on Artificial Intelligence.” Even though the document is shorter than the EU’s and lighter on detail, its principles are noticeably similar. Later that month, the Beijing AI Principles were announced by the Beijing Academy of Artificial Intelligence (BAAI)—an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government—in a joint effort with several Chinese research institutions and industry players, including Baidu, Alibaba, and Tencent. In comparison with the EU guidelines, these principles are more descriptive and less comprehensive. However, they cover three crucial clusters: research and development, use, and governance.

Promising Signals

At first sight, it is a welcome development that major international organizations and powerful states are officially addressing ethical concerns about AI. And indeed, it is possible to identify positive aspects in each of the released AI guidelines and in their wider significance: the EU document has broad scope and is deliberately defined as a living document, to be reviewed and updated over time. Given that AI systems are subject to constant change and need continuous adjustment, such a mechanism is indispensable. The EU also includes a checklist of easy-to-understand questions that companies can use as points of orientation to ensure that ethical concerns are respected.

With regard to the OECD recommendations, it is worth noting that—even though the document is non-binding—it is backed by the United States. This means that the Trump administration is officially voicing ethical concerns about AI at an international level, despite its skepticism toward multilateralism. In addition, these recommendations are not limited to the OECD’s 36 member states; six non-members have already embraced the principles as well. As a follow-up measure, an AI Policy Observatory will be established to help implement the principles and monitor adherence to them throughout the world. Based on these recommendations, but with a more limited scope, the G20 meeting in Japan this June agreed on a set of G20 AI Principles. Both the US and China were signatories.

Last but not least, there is the promising sign of the Beijing AI Principles. It was surprising and gratifying to see that China’s government—which is widely criticized for using AI as a tool to monitor and grade its citizens—is suddenly interested in ethical concerns, stating, for instance, that the research and development of AI “should serve humanity and conform to human values.” This can be interpreted as a signal that China wishes to engage in dialogue with international partners in spite of the increasingly powerful narrative of an “AI race” with the United States.

Stumbling Blocks Ahead

Nevertheless, it would be premature to speak of a new era of AI multilateralism and an effective AI ethics framework. The recent haste in drafting AI guidelines is partly motivated by the desire not to be left out of the conversation and the “standard-setting game.” It marks the start of a likely long-running debate within the international community, with many stumbling blocks ahead. A small sample of these lingering challenges follows:

First, the devil will be in the detail, as the principles presented by all sides are still very vague. Even the most comprehensive and detailed guidelines—those drafted by the EU—fail to set non-negotiable ethical principles, or so-called “red lines.” This omission was criticized even by a member of the HLEG AI, the philosopher Thomas Metzinger. At present, all three sets of principles are more about opening up new thematic areas, such as non-discrimination or robustness and safety, to international discussion. Taken together with the fact that none of these principles are enforceable by law, this means that countries continue to have a lot of room for maneuver in their application of AI systems.

Second, the possible applications of AI are too varied for a one-size-fits-all approach. Different circumstances require different solutions, and specific application areas like manufacturing, surveillance, and the military need additional, tailored guidelines.

Third, ethics is always embedded in a cultural and social context that depends on a system of values shaped by a unique history. Since algorithms will impact many areas of our everyday lives, these cultural differences must be taken into account when drafting AI ethics. For instance, studies show that people in China and in the West have quite different responses to the famous “Trolley Dilemma,” a thought experiment in ethics that forces participants to make a difficult choice between a greater and a lesser evil.

Ultimately, such culture clashes will also be reflected in international politics. It will be a huge challenge to find common ground, especially if the international community seeks to develop more detailed principles and guidelines. Bringing in additional stakeholders and translating what are ultimately ethical principles into hard law will be just as difficult.

Great Power Rivalry

Finally, and in addition to the challenges related to process, content, and implementation, there is a need to take the geopolitical context into account. This is true for new technologies in general, but especially for general-purpose tools like AI. The great power rivalry between the US and China has only just begun, and emerging dual-use technologies will be the main drivers of economic profitability and military prowess. Hence, it is highly doubtful whether the so-called AI superpowers—first and foremost Beijing, with its current demonstration of AI-based surveillance of minorities—will be willing to bind themselves in “ethical chains” through a self-imposed ethics regime. Their reluctance to ban lethal autonomous weapons systems makes this evident.

That is why it’s imperative that the EU continues to take the lead in the global debate on AI ethics in order to see the emergence of its “third way”—a digital sphere that is human-centered, regulated, and democratic. Yet setting high ethical standards is not enough. The EU and its member states also need to do more to establish a vibrant European AI ecosystem. This means not just encouraging additional investment, but also, among other measures, supporting European companies that develop AI systems. Otherwise, the EU will end up proclaiming and promoting detailed and sophisticated AI ethics guidelines without having any leverage to implement them internationally.