The manipulation of the US elections via social media has thrown German politicians into a spin, says Simon Hegelich, professor of political data science. They would do well to prepare for disruptions.
Professor Hegelich, what are “social media bots”? Social media bots are fake accounts on social media platforms. They try to mimic the appearance of normal users, but they are actually controlled by underlying software. The first bots were probably used mainly for commercial purposes – they were very simple, basically just advertising. But many state agencies, especially in the United States, were very quick to try these types of bots to manipulate public opinion, or at least to find out whether that was possible. The Guardian reported as early as 2011 that US agencies were developing “sock puppet software.” And there were definitely a lot of social bots active in the Arab Spring movement, where they were applied against dictatorial regimes.
In Germany and Europe, however, they seem to be a new phenomenon. We found bots on Twitter attacking German Chancellor Angela Merkel two years ago. Most of these came from the US and were connected to the so-called “alt-right” movement. This doesn’t necessarily mean someone was trying to manipulate public opinion in Germany. The refugee crisis was a big political topic in the US as well; it was discussed and stoked by social bots there, too.
Last November you briefed Chancellor Merkel on social media bots, trolls, and fake news – basically, the tools that could be used to manipulate this year’s election. What kind of reception did you get? What thrilled me is that Chancellor Merkel is so interested in the topic and very well-informed. It was more like a scientific debate. My impression was that Merkel especially is taking the whole topic of digital manipulation very seriously. And she’s not looking for simple solutions; she really wants to discuss this topic with her CDU party and convince them that something very disruptive is going on in the political sphere.
So how worried were they? How worried should they be? Everyone is getting particularly anxious because of what happened in the US. There’s no need to panic, though. It’s very difficult to influence people’s political opinions, no matter what tool you use. You can’t use a bot army to write “Lock Merkel up” online and actually believe someone will read it and think, “Oh, yes of course, Merkel has to go to prison.” The effects are far more indirect.
What is the effect, then? One danger is that you may see that a certain topic is very successful on social media and deem it very important, even though it isn’t in the real world. Journalists and politicians alike pick up trends from social media that don’t actually exist and make poor decisions based on them. For example, do you really think so many people care about a remake of the Ghostbusters movie with an all-female cast? The huge social media controversy it sparked last year felt somewhat overblown. Or – far more serious – did the outrage online during the Arab Spring really reflect the opinions of the majority? Another danger is that bots can lead to polarization in rhetoric and discourse because they are very aggressive in social media debates. That could lead to a situation where bots only engage with other radical users, and more moderate people just exit the discussion entirely.
You have to differentiate. Most bots aren’t even political – they’re driven by economic interests. In the US, some pro-Trump sites didn’t actually want Trump to win; they just found out you got more clicks that way. The same is true with bots. Some studies counted bots in relation to hashtags and found 400,000 pro-Trump accounts. But on closer inspection, the majority are just spam bots pushing links to Russian video games, for example, without any political message. My point is, you have to separate genuine political bot activity from ordinary spam and noise.
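To make that distinction concrete, here is a minimal sketch of the counting problem Hegelich describes: tallying every account that uses a campaign hashtag overstates the number of “political” bots, because many of them are ordinary spam accounts riding a popular tag. The data layout, account names, and threshold below are assumptions for illustration, not the methodology of any particular study.

```python
# Hypothetical illustration: naive hashtag counting vs. filtering out spam-like accounts.
from collections import defaultdict

tweets = [
    # (account, hashtags, has_political_text, has_commercial_link) -- invented sample data
    ("acct_001", {"maga"}, True,  False),   # plausibly political activity
    ("acct_002", {"maga"}, False, True),    # hashtag hijacked to push unrelated links
    ("acct_002", {"maga"}, False, True),
    ("acct_003", {"maga"}, False, True),
]

per_account = defaultdict(lambda: {"total": 0, "spam_like": 0})
for account, hashtags, political, commercial_link in tweets:
    if "maga" in hashtags:
        per_account[account]["total"] += 1
        if commercial_link and not political:
            per_account[account]["spam_like"] += 1

# Naive count: every account that used the hashtag ("400,000 pro-Trump accounts" style).
naive_count = len(per_account)

# Filtered count: drop accounts whose hashtag activity is mostly off-topic link pushing.
SPAM_RATIO = 0.8  # arbitrary threshold, an assumption for this sketch
spam_accounts = [
    account for account, stats in per_account.items()
    if stats["spam_like"] / stats["total"] > SPAM_RATIO
]

print(f"hashtag accounts: {naive_count}, of which likely spam: {len(spam_accounts)}")
```

The point of the sketch is only that the headline number shrinks considerably once spam-like behavior is filtered out; real studies use far richer behavioral features than a single link heuristic.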
At the same time, attribution is really difficult – it’s a worldwide economy. Say you need fake user accounts, and to register a lot of them you have to bypass the CAPTCHA checks. You might have an office in Pakistan bypassing the CAPTCHAs, and then you have a fake account generator that might be Dutch. You run your servers through the United Kingdom because they’re cheap there. You use software programs in Russia and manipulate in the US – it’s an international chain of production. This also makes it very hard to say who is behind the manipulation.
German politicians are already discussing countermeasures like fining those who spread fake news or setting up a kind of fact-checking clearing house, which some denounce as an Orwellian Ministry of Truth … There is a lot of activism. The US already has a new law for this – it was one of the final acts of the Obama administration – so they are actually creating a sort of Ministry of Truth. I’m not sure that this will be particularly helpful, but I think it is very important to discuss the issues and increase the pressure on Facebook especially, making sure they are part of the solution. And if we are just talking about normal people, then all the necessary laws are in place: slander is an offense, be it on Facebook or anywhere else. But in the case of cyberoperations and large-scale cyberattacks, it’s not a matter of law but a matter of countermeasures.
The outgoing Obama administration has enacted some of those, holding Russia responsible for the hacking of the Democratic National Committee servers … I’m skeptical as far as the “blame Moscow” narrative is concerned. If, for instance, the e-mails that were published by WikiLeaks had really been obtained through a hack, then the National Security Agency (NSA) would be able to say from which IP address they came and where they went. WikiLeaks’ founder Julian Assange has said that they were leaked. And the fact that Russian malware was used only goes so far. If I wanted to hack something, I would use Russian malware as well because you can easily buy it online …
So we should be careful pointing fingers but that shouldn’t stop us from fighting the attacks? Exactly. Then again, we should all be certain that Russia, China, indeed every country in the world is trying to get a handle on this. It’s likely there is this activity in Russia, but I doubt that the WikiLeaks stories came from Russia.
What role is WikiLeaks playing? Can we expect more data dumps with relevance to Germany? WikiLeaks is still doing what they were doing ten years ago. They’re publishing all kinds of material they think is relevant; they don’t really care where they get the material from and care even less about political consequences. And we will definitely see more dumps, even though the reporting on them is not always accurate. When WikiLeaks published more than 2,400 documents from a Bundestag investigative committee on the collaboration between German intelligence services and the NSA on December 1, 2016, many people thought this material was taken during the 2015 hack of the Bundestag. But as far as I know, the material published by WikiLeaks amounts to over 90 gigabytes, while the total amount of data transferred during the Bundestag attack was 16 gigabytes. If this is correct, then the latest WikiLeaks drop had nothing to do with hacking; it was a leak. Either way, there will be more of it this year – WikiLeaks has said as much, as has the German hacker Kim Dotcom, of Megaupload fame, or infamy, who is in New Zealand fighting extradition to the US.
Is the German government committing enough resources? We’ll find out over the coming months. There is a lot going on, but I’m not sure it’s going in the right direction. For example, the German army is trying to build up a cybersecurity center and is recruiting a lot of experts from various universities for the different aspects of cybersecurity. The Federal Office for Information Security (BSI) is quite busy, too. My impression is that many were very surprised by the US elections and their manipulation, and have only now started to act.
How influential is fake news? Fake news is where you really see disruptive change in public opinion. WikiLeaks’ slogan was, “If lies can start wars, the truth can start peace.” Now we have a situation where you have information and counterinformation for everything, and you never know what is true and what isn’t. People who get their information from social media only trust their own networks. This phenomenon isn’t completely new. There have always been political movements or parties that deny facts. But with fake news, it’s easier to be destructive in social media. It’s much easier to spread lies. You can spread more fake news in the time that it would take to understand one real news story. Also, dramatic news, even if it’s fake, gets a lot of interest. Like the story that Pope Francis was supporting Donald Trump.
Why are people so easily manipulated by fake news? I think the problem is we all still have to get used to social media in some ways. We still think quantity and quality are connected. If we read something twice, four times, twenty times, we start to think there must be something behind it, even if it’s nonsense. And the problem with fake news is, even if it’s proven to be false, there always remains a glimmer of doubt. Also, because everything is connected on the internet, we don’t have independent information anymore. Even a journalist trying to verify a story with two independent sources might have the problem that he or she cannot be sure they aren’t somehow digitally connected. During the shopping mall shooting in Munich last year, a well-known terrorism expert spread news of further – fictitious, as it turned out – attacks in the city. Apparently, he had two independent sources for this, but they in turn relied not on firsthand information but on Twitter. This is a big problem in everything we do.
Populist parties like the Alternative für Deutschland (AfD) are very active on social media and attract high numbers of followers. Does that mean populists are more media-savvy? First, there is a good reason the AfD and its supporters use social media: they consider the established media to be biased against them and unreliable. Therefore, they look for different channels. Second, a lot of this social media activity around the AfD, Pegida (the anti-Islam, xenophobic movement that first emerged in Dresden in 2014), and the new right movement in Germany is created by very, very few accounts, or by very few people behind many accounts. There is a lot of automation or manipulation of these social media trends, especially when it comes to these parties. Half of these accounts are fake. I can’t prove it for every user, but there are users systematically liking every post on every Pegida page. Suddenly you have users that like 30,000 posts a month.
It doesn’t mean they are all bots. There are also trolls or people who are very engaged, sitting at their computers for hours on end posting and liking. We identified one pensioner in Erfurt who has spent at least eight hours a day for the last one-and-a-half years in front of his computer, writing hate posts against refugees. He’s not even getting paid. He thinks he serves Germany that way.
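A minimal sketch of the kind of activity heuristic Hegelich alludes to with the 30,000-likes-a-month figure: flag accounts whose monthly like counts are implausibly high for a human. The data structure, user names, and threshold are assumptions for illustration only, and, as the Erfurt pensioner shows, extreme activity alone does not prove an account is automated.

```python
# Hypothetical heuristic: surface hyperactive accounts for manual review, not
# for automatic classification as bots.
LIKES_PER_MONTH_THRESHOLD = 30_000  # roughly one like every one to two minutes, around the clock

# Invented sample data: likes per account over one month on a set of monitored pages.
monthly_likes = {
    "user_a": 180,      # ordinary user
    "user_b": 31_250,   # likes every post on every monitored page
    "user_c": 9_400,    # heavy user, or a very dedicated troll
}

suspicious = {
    user: count
    for user, count in monthly_likes.items()
    if count >= LIKES_PER_MONTH_THRESHOLD
}

for user, count in suspicious.items():
    print(f"{user}: {count} likes/month -- flag for manual review")
```

In practice such a threshold only narrows the field; distinguishing automation from obsessive human engagement still requires looking at timing patterns, content, and account history.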
So the type of election campaign we saw in the US, extremely polarized and targeting individual voter groups on social media and manipulating opinion – are we going to see that in Germany this year too? Society in Germany isn’t as polarized or as segregated as in the US, so it will definitely be different. But our election system would allow for targeting different voter groups. I think 2017 might be the last more or less traditional campaign we’ll see in Germany. And it’s very important to have a real discussion about what’s going on in the public sphere beyond the election of 2017 because I think we’re about to witness a disruptive change in public opinion and democracy. I’m really wondering if the public sphere is changing fundamentally. Will we still have elections that are equal, free, and secret? Because all of this is definitely going to change voting – maybe for the good, but right now it looks a bit frightening.