Stephen Kinsella

Fake accounts and election interference - our submission to the EU Commission’s consultation on DSA guidelines for election integrity

An unusually high number of elections are taking place around the world in 2024, including India, the USA, the European Union, and probably the UK. That means a healthy information environment, where citizens are able to access accurate information and debate issues freely without interference or manipulation, is particularly important. 


Social media platforms could, and should, enhance the information environment for democratic participation, enabling citizens to access information and debate issues freely. However, in practice, recent elections have seen mounting evidence of social media contributing to a degradation of political discourse and a destabilisation of democratic institutions. 


The European Commission has therefore correctly identified, as part of its Digital Services Act, the need to require platforms to act to protect the integrity of elections. In the run-up to the European Parliament elections, it launched a public consultation on proposals for election integrity guidelines.


There’s a lot to welcome in the Commission’s proposals. First and foremost, it’s hugely encouraging for a regulator to be focusing directly on the risks posed to elections by badly designed social media platforms. This is a strength of the DSA, and other democratic governments and regulators around the world should take note. Many of the Commission’s specific proposals are sensible and should make some difference. These include, for example, suggestions that platforms introduce “positive friction”, such as ‘think before you share’ messages and limits on resharing, and that they apply labels providing contextual information to posts and posters, including information provided by independent fact checkers.


However, we’re concerned that the measures currently being proposed will not be sufficient to prevent threats to the integrity of the European elections. This is particularly the case given that these elections are taking place in a context not only of platforms which are not designed to prioritise safety, accuracy, or healthy debate - but also of organised operations, including from hostile states, seeking to undermine and destabilise European democracy. 


Social media platforms, left unregulated, make design choices which serve an advertising business model which ruthlessly prioritises engagement and user numbers. It is this focus which has led them to use recommender algorithms which promote and incentivise extreme and emotive content rather than accurate information or balanced journalism. It is also this focus which leads to laissez-faire approaches to account creation and user authenticity - because all accounts, fake or real, contribute to the number of “eyeballs” which can be sold to an advertising customer.


Influence operations may seek to promote support for a particular political party (usually one with eurosceptic or anti-NATO leanings), promote particular points of view (e.g. justification of Russia’s invasion of Ukraine or hostility to Ukrainian refugees) or simply to stoke division and instability (e.g. over immigration, or covid, or climate change). In all cases, such operations rely heavily on fake accounts, which conceal the true identity and affiliation of the account operator. Such accounts are used to seed and amplify misleading content, and to manipulate conversations through inauthentic comments and reactions and the creation of fake groups or pages. They serve both to deceive authentic users directly, and to manipulate recommender algorithms to further amplify the content. 


The issue of foreign interference and electoral manipulation using fake social media accounts first came to prominence several years ago, most notably during the 2016 US presidential election. Since then, facing criticism in the media and from some politicians, the platforms have trumpeted their efforts to detect and remove networks of fake accounts. However, the problem of influence operations targeting elections clearly remains significant, and the extensive use of fake accounts remains core to how these operations work. To offer a few recent examples:


  • In September 2022 Meta reported having taken action against a “Russian network [that] targeted primarily Germany, France, Italy, Ukraine and the UK, with narratives focused on the war in Ukraine and its impact in Europe. The largest and most complex Russian operation we’ve disrupted since the war in Ukraine began, it ran a sprawling network of over 60 websites” - but only after the network had first been documented by German investigative journalists at public service broadcaster ZDF.

  • In its report under the EU’s Code of Practice on Disinformation covering March to June 2023, TikTok disclosed having detected and removed 5,885,958 fake accounts purporting to be within the EU, which had accumulated 47,409,587 followers at the time of detection and removal. This included 9,246 accounts purporting to be in Slovakia, with 356,935 followers between them - in the run-up to crucial elections there. In the same period, Meta reported that “fake accounts represented approximately 4-5% of our worldwide monthly active users (MAU) on Facebook”.

  • In July 2023, inauthentic accounts were found to have played a key role in influencing online political discourse around the Uxbridge and South Ruislip by-election in the United Kingdom. Inauthentic accounts greatly amplified hostility to the London Mayor’s “Ultra Low Emission Zone”. Not only did this potentially influence the result, a narrow surprise win for the Conservative Party, but it also appeared to influence the direction of government policy, with the Uxbridge and South Ruislip result linked to subsequent changes in the tone and content of the UK government’s climate policies.

  • In September 2023, Microsoft reported having uncovered Chinese social media influence operations that “deploy thousands of inauthentic accounts across dozens of websites, spreading memes, videos, and messages in multiple languages”. It warned that “ahead of the 2022 US midterms, Microsoft and industry partners observed CCP-affiliated social media accounts impersonating US voters—new territory for CCP-affiliated IO [influence operations]. These accounts posed as Americans across the political spectrum and responded to comments from authentic users.” The report noted that “unlike earlier IO campaigns from CCP-affiliated actors that used easy-to-spot computer generated handles, display names and profile pictures, these more sophisticated accounts are operated by real people who employ fictitious or stolen identities to conceal the accounts’ affiliation with the CCP”.


Despite years of evidence of the risks to elections, and years of talking up their post-hoc detection and removal efforts, platforms remain extremely vulnerable to manipulation by bad actors using fake accounts. The rapid development of generative artificial intelligence technologies is likely to increase the scale and sophistication of the manipulative content which influence operations are able to produce - so the piecemeal, post-hoc measures which have failed to tackle the problem so far are likely to become ever less effective.


Clean Up The Internet has therefore submitted a detailed proposal to the EU, making the case for requiring platforms to implement optional user identity verification. Under such a scheme, platforms would offer users the option to verify their identity, clearly label whether or not each account is verified, and give users enhanced options to limit or block interaction with non-verified accounts.


Such a scheme would not remove the problem of fake accounts entirely, but it would make it much harder to deploy networks of fake accounts to manipulate political discourse at scale. All users would gain a new and crucial piece of information with which to assess the trustworthiness of a source: where a fact or an opinion was being offered and endorsed by accounts which had chosen not to be verified, users could bring their own judgement to bear on what this might mean for the reliability of the information or the authenticity of the opinion. In addition, users would have the option of avoiding non-verified accounts entirely, giving those concerned about disinformation a new and very easy-to-use tool to protect themselves. Platforms’ recommender algorithms would also be less vulnerable to manipulation through fake engagement, because they would be able to distinguish between engagement by verified and non-verified accounts, and to investigate any suspicious discrepancies in engagement between these two categories.
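To illustrate that last point, here is a minimal sketch of how a platform could count engagement separately by verification status and hold back posts with suspiciously skewed engagement for review. It is purely illustrative: the field names, the 100-engagement minimum, and the 90% threshold are our own assumptions, not a description of any platform’s actual systems.

```python
# Illustrative sketch only - not any platform's real system.
# Shows how engagement could be counted separately for verified and
# non-verified accounts, and how a heavily skewed post could be flagged
# for review instead of being amplified on raw engagement alone.

from dataclasses import dataclass

@dataclass
class Engagement:
    account_id: str
    verified: bool  # has the account completed optional identity verification?

def summarise_engagement(engagements: list[Engagement],
                         min_total: int = 100,
                         skew_threshold: float = 0.9) -> dict:
    """Count verified vs non-verified engagement and flag suspicious skew."""
    total = len(engagements)
    verified = sum(1 for e in engagements if e.verified)
    non_verified = total - verified
    non_verified_share = non_verified / total if total else 0.0
    return {
        "verified": verified,
        "non_verified": non_verified,
        # Flag posts where nearly all engagement comes from non-verified
        # accounts, once there is enough volume for the skew to matter.
        "flag_for_review": total >= min_total and non_verified_share >= skew_threshold,
    }
```

Under these assumptions, a post with 950 non-verified engagements and 50 verified ones would be held back for review rather than boosted on raw numbers alone, while a post with a healthy mix of both would be unaffected.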


Our submission to this EU consultation focuses on how optional user identity verification would help disrupt disinformation and influence operations, and therefore protect the integrity of elections. But such a measure would also help reduce a wide range of other harmful and illegal online activities enabled by fake accounts - ranging from fraud to hate speech and gender-based violence. We provided an overview of these wider benefits in our submission to Ofcom as part of their recent consultation on the illegal harms codes of practice under the UK’s Online Safety Act. These other benefits may be outside the scope of the EU’s current consultation on election integrity - but they are very much within the scope of the broader DSA.


We hope that the EU Commission will give our proposal serious consideration. Whilst the measures in the current draft guidelines are sensible and will help, their effectiveness would be greatly enhanced by also tackling fake accounts - a feature of platform design on which influence operations heavily rely. We look forward to the Commission’s response.



