Disinformation: One of the Greatest Threats to European Democracies

15 February 2021

The automation of the production and circulation of disinformation has never been so easy or so cheap. Automated social bots can now disseminate a post containing disinformation more than fifty times a day on average. Their role is not limited to that; they are also used to generate ad revenue rapidly by directing traffic. We have only recently started to awaken to the consequences of this technology for our societies and democracies. We have also only recently begun to question the power of the Silicon Valley giants and, in turn, have grown wary of the economic profits they make at the expense of our democracies. This is a transnational issue, one that transcends borders and continents. It also plays out mainly on a single, albeit vast, platform, the internet, which is not controlled by any one nation.

During the 2016 US presidential election, a team from the University of Oxford’s Computational Propaganda Project led by Samantha Bradshaw found that “professional news content and junk news were shared in a one-to-one ratio, meaning that the amount of junk news shared on Twitter was the same as that of professional news”. During the 2017 German and French elections, meanwhile, this ratio was one-to-four and one-to-two respectively. This does not mean that Europeans engage more critically with social media, but rather that this capability has not yet reached its full potential in these regions. Nor should countries with low rates of social media use consider themselves safe: they have just as many bots (appropriately named “sleeper bots”) ready to activate as user numbers increase.

A Threat to the Pillars of Democracies
Representation in Elections

One core pillar of democracy is equality, where all citizens of eligible age have the same opportunity to elect their representatives to government. Foreign-imposed disinformation is one of the greatest threats to European democracies because it aims at voter suppression and thus affects electoral turnout (representation). Disinformation attempts observed in recent years have been a direct threat to this core principle. This is done by spreading false claims that polling stations are closed, that voting is possible online or by phone, or that elections are delayed. Following the latest European Parliament elections in 2019, the European Commission released a report in which it explicitly stated that “efforts were identified, for instance to suppress voter turnout through attacks on government websites and to disseminate manipulative content”. Tytti Tuppurainen, the President-in-Office of the Council, also reiterated that “fact-checkers and academia have nonetheless flagged malicious activity from foreign sources to influence turnout and voter preference”, highlighting once again the vulnerability of European elections to foreign disinformation and the severity of the problem. This is a particularly significant problem in the context of European Parliament (EP) elections, given their historically low turnout. In this context, the mobilisation of core party electorates is key, especially when there are many swing and undecided voters. Targeting such voters would mean that “political groups with anti-European sentiment or radical political views can be overrepresented due to the substantially lower number of votes required for an EP mandate”.

For an actor intent on undermining organisations and institutions like the European Union, exploiting low turnout trends and further suppressing turnout in order to destabilise pro-European majorities in a number of Member States is low-hanging fruit. Further, lowering EP election turnout could also reaffirm its image as a ‘second-order’ election and provide a justification for questioning the legitimacy of the legislature.

Individual Autonomy and Informed Decision-Making

Another key pillar of democracy is individual autonomy, where citizens make democratic decisions for themselves in elections. However, a democracy cannot be reduced to elections alone. Democratic decision-making is not simply about electing representatives but also about the communicative exchanges between citizens through which they become informed of their own and others’ preferences, and which therefore legitimise overall decision-making. Disinformation fosters epistemic cynicism, in which citizens oppose or become indifferent to knowledge produced and shared by higher epistemic authorities such as experts, academics and reputable journalists. This is much more complex than it seems, as disinformation campaigns involve not only the spread of false information but also fake versions of these institutions, whereby actors impersonate reputable sources and promote competing claims. During the COVID-19 pandemic, Russian disinformation campaigns were reported to target European citizens by claiming that the EU, NATO and European governments had “failed to protect their populations or selfishly refused to render needed aid to their less fortunate European partners”, in order to sow distrust in political leadership and expertise. It is especially problematic when disinformation is aimed at undermining democratic institutions through “allegations of voter fraud, election rigging, and political corruption”. This dismantling of credibility and trust in official and reputable sources of democratic knowledge, together with the confusion in information attribution created by fake institutional accounts, manipulates and interferes with the democratic decision-making process, and thus with individual autonomy, whereby individuals vote based on their own decision-making.

Inclusion and Representation in Democratic Deliberation and Processes

Foreign-imposed disinformation is also one of the greatest threats to European democracies because it leads to the unjustified inclusion of actors in democratic processes. This can be harmful for a deliberative democracy if it leads to people distrusting a discussion or medium of political communication due to pervasive inauthenticity: the belief that a number of interlocutors within this democratic deliberation do not have real identities and are instead trolls, fake accounts, foreign agents or even automated bots. For instance, on December 8, 2018, a letter appeared on social media allegedly written by the French actor Gérard Lanvin, criticising President Macron and his government and condemning public officials by saying that those who “from birth to death live off of public funds, enjoy special social security benefits and are exempt from taxes, should at least have the decency of not talking about equality”. The post was shared over 251,000 times and viewed 6.4 million times. The letter was in fact not written by the actor, who reported the identity theft to the authorities. Yet the damage was done, even though Facebook removed it from its platform, especially since many variations of the same content continue to circulate on the internet.

More alarmingly, disinformation campaigns can also displace the inclusion of legitimate members. This is done by outcompeting legitimate voices through bots that disseminate content at unnatural speeds, multiple fake accounts, and advertising. Although arguments against countering disinformation often invoke freedom of speech and the threat of censorship, one could also argue that creating and disseminating disinformation is not free speech, and that there is no such notion as a “freedom to unnaturally amplify one’s voice through automation and fake identities”, which is closer to fraud. This is not to say that there are no cases where legitimate information is censored as disinformation. To address this, a line could be drawn between the freedom to create information and the freedom to mass-disseminate information at disproportionate speeds with the use of bots or advertising. Censorship would then not be viewed as a justified solution; instead, the amplification of disinformation by algorithmic systems would be problematised, potentially with legal repercussions. To return to unjustified inclusions in democratic processes, it is also important to examine how they “devalue the contributions of legitimate members of the polity” and lead to an environment of distrust in the political communication landscape where “the accusation of an account being a ‘Russian bot’ [has become] a common dismissive reply online”. The problem, therefore, is not only the spread of inauthenticity but also the perception of inauthenticity. As the lines between real and fake have blurred, it is now difficult to establish which voices are included in or missing from deliberative processes, and whether a discussion really involves the true perspectives of the people affected by an issue.

(Dis)trust in Democratic Institutions

Disinformation is also a threat to European democracies because it exploits existing divisions and leads to further polarisation. By impersonating different fake personalities, disinformation disseminators contribute to ‘techno-affective polarisation’. This refers to the way actors use social media platforms to spread not only disinformation intended to mislead but also content meant to “stoke moral revulsion toward particular individuals (such as electoral candidates or journalists), political parties, and social groups”. Alongside the use of fake identities, disinformation disseminators can misrepresent the views of one social group about another, “denying them [social groups] the capacity to author their own claims”. Even if it cannot be demonstrated that disinformation creates new divisions in democratic societies, it is nonetheless empirically possible to measure whether it exploits existing divisions through hyper-polarisation.

For instance, in October 2017, Catalonia, a semi-autonomous region in Spain, held an independence referendum without Madrid’s consent. Violence ensued between civilians and the police forces aiming to prevent the referendum. Concerns soon grew over foreign disinformation campaigns: unverified claims and pictures circulated on social media to magnify state brutality against secessionists, and several English-speaking pro-Russia Twitter accounts hijacked the online referendum discussion, amplifying police brutality and boosting the #catalanreferendum hashtag by 7,500% that day. Concerns over this episode of foreign-imposed disinformation reached the European Council. A study conducted by the European Parliament claimed that, “if the referendum itself and the political gridlock that ensued were driven by deep-seated political, economic and social divisions within the country, the case of Catalonia’s botched independence votes exemplifies how effective foreign operators can exploit already-existing tensions and reinforce them through careful manipulation”, highlighting once again the severity of the problem for democracies. Hyper-polarisation deeply threatens mutual respect and tolerance of one’s independent judgment and authority to decide one’s vote, which are key elements of democracy. Increasing polarisation also threatens democracy because ever more opposed views can move groups from simple disagreement to distrust, in which the other’s participation in debate and society is rejected.

What To Do 

The underlying issues that need to be addressed are the primary motivations for creating disinformation: financial gain and political gain. For the former, the problem needs to be curtailed by addressing both the supply (producer side) and the demand (consumer side) for disinformation. How to do this is, of course, the more difficult question to answer. Google and Facebook have already claimed to have implemented some changes to their systems. Google worked on preventing revenue flow to the owners of “bad sites, scams and ads” and banned hundreds of publishers from its advertising network AdSense. Facebook claimed to have “taken action against the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications.” As good as this sounds, these initiatives suffer a major technological disadvantage: they are vulnerable to swift counter-measures developed by disinformation disseminators. Regarding the Google and Facebook cases, for example, Dr. Claire Wardle from Harvard’s Shorenstein Center claimed that, “anecdotally, disinformation creators have explained that, while they experienced short term losses in revenue earlier in the year, they have returned their profits to previous levels using other ad networks that are willing to partner with them.” More importantly, the tech giants have no incentive to change the rules of the game, since they rely immensely on social engagement.

But what about the demand side, the consumers? Indeed, Mark Zuckerberg asked the same question and claimed that people should ultimately decide what is credible. Are we really so gullible as to fall prey to disinformation that easily? Current research shows that yes, we do have a psychological predisposition to accepting disinformation. Across all educational levels, participants in a Fake News Game answered fifty percent of the questions incorrectly. This is reflected in how disinformation on average spreads six times faster than real news on social media. For platforms that rely financially and existentially on user engagement, it is therefore logical to amplify such information over truth. Although the recent craze over fact-checking appears to be a step in the right direction, it still does not prevent the creation or spread of disinformation. In fact, according to The Science of Fake News, fact-checking websites may even be counterproductive: research on information recall and familiarity bias has revealed that when fact-checking sites repeat false information, even to denounce it, they “may increase an individual’s likelihood of accepting it as true”. In the past year, Twitter revealed that the Conservative Party in the UK misled the public by changing the handle of its press office account from @CCHQPress to @factcheckUK and posing as an independent fact-checking service throughout the leaders’ debate. Such stunts, and the aforementioned general problems with fact-checking, leave us a very difficult reality to work with, especially if we accept that our psychology does not always work in our favour. That is why technological or educational approaches by themselves can only get us part of the way.

There is also political gain to be made from spreading disinformation. For instance, in Germany in 2016, it was falsely reported that a thirteen-year-old Russian-German girl had been kidnapped and raped by two migrant men. The allegations were proven untrue, but not before Russian media spread the story widely and the Russian Foreign Minister publicly accused the German government of covering it up. The disinformation ignited a nationwide debate over the resettlement of Middle Eastern refugees, as well as protests over the government’s handling of the case. It fuelled growing anti-immigrant sentiment at a time when the government and public were divided over the issue, and it aimed to politically undermine the German government by encouraging mass demonstrations. Other examples of disinformation for political gain are unfortunately plentiful, such as Indian far-right figures circulating false claims about religious minorities on messaging applications to spark communal violence and exploit growing polarisation between communities. In Burma, ultranationalist Buddhist monks have also been reported to spread disinformation on social media in order to mobilise supporters for violence against the Rohingya communities. It is clear that much is at stake here: our democracies, our fundamental beliefs in human rights, and peace.

The EU’s Approaches

Addressing the problem through legislation is tricky. Governments with authoritarian leanings have used the problem to introduce laws that shut down opposition figures or human rights activists, something about which Human Rights Watch has raised the alarm. Besides Singapore, Malaysia and the Philippines, Germany’s Network Enforcement Act (NetzDG), which requires large social media companies to remove ‘illegal content’ from their platforms immediately, has also come under scrutiny for opening the gateway to censorship. Clearly, the concern here is not unwarranted. Laws are also difficult to enforce, especially when, for example, a social media company operates through multiple companies, or has its main office in one country but its servers in another.

The problem is obvious, its scale is alarming, and the EU is rightfully concerned; the EEAS Strategic Communications Division has been tasked with two specific responsibilities with regard to disinformation. The first, Communications Policy and Public Diplomacy, pertains to outreach between the EU and its external audiences, providing training and guidance to EU delegations. The second, Task Forces and Information Analysis, focuses on the Western Balkans and Europe’s eastern and southern neighbourhood, conducting political advocacy and engaging in cultural diplomacy. Through task forces such as the East StratCom Task Force, it has a mandate to target disinformation and foreign interference campaigns. One outcome of the Task Force has been a weekly review of pro-Kremlin disinformation targeting the EU, published on the EUvsDisinfo website. Furthermore, in September and October 2018 the EU launched a Code of Practice on Disinformation in collaboration with multiple private companies. It served as a self-regulatory experiment in which the tech industry volunteered commitments on regulating online advertisements, political advertising, integrity of services, transparency for consumers and transparency for researchers. As it was a voluntary exercise that was not externally verified, the results, which will be used to inform further policy responses, will need to be taken with a pinch of salt.

Two months after the Code of Practice, the European Commission launched the Action Plan Against Disinformation. It led to the Rapid Alert System, launched in March 2019 to raise awareness of disinformation spreading across EU Member States by connecting various disinformation monitoring capabilities in real time, within and outside the EU. A report by Carnegie Europe, however, points out that only a few highly engaged Member States report disinformation. Furthermore, EU-affiliated election observation missions have been established. Using a newly developed methodology, these missions have been monitoring online political campaigns in Peru, Sri Lanka, and Tunisia as a test, with the aim of making such monitoring standard routine for future elections and missions. More recently, the Commission has been developing two policies: 1) the 2020-2024 European Democracy Action Plan, which is highly likely to involve policy commitments regarding election monitoring and disinformation, and 2) a new Digital Services Act, which builds on current e-commerce rules and sets out regulatory powers for the EU over digital platforms.

However, thus far there have been no adequate investigations into particular cases that would allow malicious actors and activities to be rightly punished: despite multiple known election interference attempts, there have been no legal investigations in Europe like the Mueller investigation, set up to determine who the malicious actors were and what their intentions have been. Moreover, many experts argue for an international outlawing of disinformation, as well as for a framework to coordinate moves by Western democracies to punish cybercriminals. The EU’s acceptance of disinformation as a security threat would be the appropriate way to start. For one, it would allow appropriate funding to be directed at solving the problem. It would also allow national intelligence capabilities to combat the problem in a more cohesive way, perhaps even through coordination with other nations.

Foreign-imposed disinformation undermines trust in democratic institutions and increases hyper-polarisation. Disinformation can fundamentally dismantle core democratic elements and processes such as freedom of information, representation, and trust in elections and democratic institutions. More crucially, it targets deliberation, leaving citizens unable to make informed decisions and participate in democratic discussions. Understanding key policy issues and being able to form a position on them is vital for democracy; otherwise, the legitimacy of democratic governments and their decisions would be questioned. Ultimately, the question remains whether the EU, big technology companies, educators and legislators can really stay ahead of the problem. In the absence of regulation requiring more transparency from social media platforms over such issues, tackling foreign-imposed disinformation and maintaining trust in democratic institutions will become even more difficult, especially in the coming age of deep-fakes and ever more sophisticated technology. For now, it is very clear that European and all other democracies must build resilience in the form of media literacy, adequate legislation, and active communication with the public about the reasons and methods behind disinformation, encouraging ‘disinformation inoculation’, a concept whereby exposure to knowledge about how and why mis- and disinformation is created can build public resilience. Although the political incentives behind disinformation are more difficult to address, the financial incentive can still be targeted so that individuals and lone actors do not feel incentivised to spread disinformation. Placing disinformation under the umbrella of security threats could also hasten solutions. The puzzle is a long way from being solved. In the meantime, the price tag on our democracies, and the threats to our international security, can continue to be determined by digital platforms owned by tech giants in Silicon Valley.