
Academic research about anonymity, inauthenticity, and misinformation

  • Writer: David Babbs
  • Dec 18, 2019
  • 20 min read

Updated: Sep 16

Clean Up The Internet’s thinking about online anonymity, and the role its misuse plays in undermining online discourse, is informed by a wide range of academic research. Below is a non-exhaustive list of academic works exploring the relationship between anonymity, inauthentic accounts, lack of effective verification, and misinformation and disinformation. We attempt to summarise each piece and include some short quotes.


Where a full version of the article is available for free online, we include a direct link. Where the article is paywalled, we include the Digital Object Identifier.


We first published this list in October 2019, and it was last updated in January 2022. We’d hugely welcome other relevant research being brought to our attention.


Please see also our companion piece covering research relating to anonymity, online disinhibition, abuse, incivility and trolling.



Cloaked Facebook pages: Exploring fake Islamist propaganda in social media

Johan Farkas, Jannick Schou, Christina Neumayer

New Media & Society, Volume 20, Number 5 (2018), pp. 1850-1867

This research analyses “cloaked” Facebook pages, created to spread political propaganda by imitating the identity of a political opponent in order to spark hateful and aggressive reactions. It looks at Danish Facebook pages disguised as radical Islamist pages, which provoked racist and anti-Muslim reactions as well as negative sentiments towards refugees and immigrants in Denmark in general.

The “cloaked pages” were inauthentic pages purporting to be from radical Islamists, but actually authored by Islamophobic provocateurs. They issued provocations such as calling for sharia law, mocking Danish customs, and celebrating images of a burning Danish flag. An example of a fake post read: “We Muslims have come here to stay. WE have not come here for peace but to take over your shitty infidel country”. These posts were widely shared on Facebook, generating significant outrage and provoking expressions of hostility towards Muslims. The pages also received national media coverage, and prompted an expression of outrage from a Danish member of parliament.


“Although all news media questioned the authorship of the Facebook pages, most Facebook users who shared or commented on these pages’ posts assumed the originators were radical Islamists”.
“A key strategy for disseminating hate speech online is to hide or disguise the underlying intentions - both to avoid detection and appeal to a large audience”.
“The cloaked Facebook pages became sites of aggressive posting and reaction through comments, producing a spectacle of hostility. The page administrators created this hostility through new aggressive posts, and users maintained and reproduced this hostility through their reactions. The user-generated stream of information was based on aggressive and violent disinformation through the cloaked Facebook pages and fueled antagonistic reactions, contributing to neo-racism in Denmark."


News sharing on UK Social Media - misinformation, disinformation and correction survey report

Andrew Chadwick and Cristian Vaccari

Loughborough University, 2019

The focus of this report is the habits and attitudes of UK social media users in relation to misinformation, based on public opinion research conducted by Opinium.

Some striking results include:

  • 42.8 percent of news sharers admit to sharing inaccurate or false news

  • 17.3 percent admit to sharing news they thought was made up when they shared it. These users are more likely to be male, younger, and more interested in politics.

  • A substantial amount of the sharing on social media of inaccurate or made up news goes unchallenged. Fewer social media users (33.8 percent) report being corrected by other social media users than admit to sharing false or exaggerated news (42.8 percent). And 26.4 percent of those who shared inaccurate or made up news were not corrected.

  • Those who share news on social media are mainly motivated to inform others and express their feelings, but more civically-ambivalent motivations also play an important role. For example, almost a fifth of news sharers (18.7 percent) see upsetting others as an important motivation when they share news.


The authors note that the behaviour of sharing an inaccurate piece of content online occurs in the same disinhibited context as other forms of social media interaction:


“In social media interactions, anonymity or pseudonymity are widespread, or people use their real identities but have weak or no social ties with many of those with whom they discuss politics. As a result, when interacting on social media, people are generally more likely to question authority, disclose more information, and worry less about facing reprisals for their behaviour. The fact that many social media users feel less bounded by authority structures and reprisals does not necessarily lead to democratically undesirable interactions. Social media environments encourage the expression of legitimate but underrepresented views and the airing of grievances that are not addressed by existing communicative structures. However, social media may afford a political communication environment in which it is easier than ever to circulate ideas, and signal behavioural norms, that may, depending on the specific context, undermine the relational bonds required for tolerance and trust."

Suspicious Election Campaign Activity on Facebook: How a Large Network of Suspicious Accounts Promoted Alternative Für Deutschland in the 2019 EU Parliamentary Elections

Trevor Davis, Steven Livingston, and Matt Hindman

George Washington University, 2019


This report contains a detailed analysis of the ways in which networks of suspicious Facebook accounts promoted the German far-right party Alternative für Deutschland (AfD) during the May 2019 EU parliamentary elections. It identifies extensive use of apparently inauthentic accounts, at a point when Facebook had claimed that this problem had been addressed.

The authors identify two distinct networks of inauthentic accounts. The first was used to create a false impression of credibility for AfD pages by artificially boosting their followers. Of the second, they write:

"The second network we identified is more concerning. It is a network comprised of highly active accounts operating in concert to consistently promote AfD content. We found over 80,000 active promotional accounts with at least three suspicious features. Such a network would be expensive to acquire and require considerable skill to operate. These accounts have dense networks of co-followership, and they consistently “like” the same sets of individual AfD posts. Many of the accounts identified share similar suspicious features, such as two-letter first and last names. They like the same sets of pages and posts. It is possible that this is a single, centrally controlled network. Rates of activity observed were high but not impossible to achieve without automation. A dexterous and determined activist could systematically like several hundred AfD posts in a day. It is less plausible that an individual would do so every day, often upvoting both original postings of an image and each repost across dozens of other pages. This seems even less likely when the profile’s last recorded action was a post in Arabic or Russian. Additionally, we found thousands of accounts which: - Liked hundreds of posts from over fifty distinct AfD Facebook pages in a single week in each of ten consecutive weeks. - Liked hundreds of AfD posts per week from pages they do not follow. Automated accounts are the most likely explanation for these patterns. The current market price of an account that can be controlled in this way is between $8 and $150 each, with more valuable accounts more closely resembling real users. In addition to supply fluctuations, account price varies according to whether the account is curated for the country and whether they are maintained with a geographically specific set of IP addresses, if they have a phone number attached to them (delivered with the account), and the age of the account (older is more valuable). Even if the identified accounts represented the entire promotional, purchasing this level of synthetic activity would cost more than a million dollars at current rates. Data collection from Facebook is limited, making it difficult to estimate the size of the network or the scale of the problem. Accounts in our dataset had persisted for at least a year."

“THE RUSSIANS ARE HACKING MY BRAIN!”: Investigating Russia's Internet Research Agency Twitter tactics during the 2016 United States presidential campaign

Darren L. Linvill, Brandon C. Boatwright, Will J. Grant, Patrick L. Warren

Computers in Human Behavior, Volume 99, October 2019, Pages 292-300

This is a detailed study of the methods employed by the “Internet Research Agency”, an apparent arm of the Russian state, during the 2016 US presidential election. It describes the extensive use of false identities and anonymous accounts to disseminate disinformation. The authors detail fake accounts, run out of Russia, which purported to be local news sources with handles like @OnlineMemphis and @TodayPittsburgh. Others purported to be local Republican-leaning US citizens, with handles like @AmelieBaldwin and @LeroyLovesUSA, and yet others claimed to be members of the #BlackLivesMatter movement with handles such as @Blacktivist.


“Here we have demonstrated how tools employed by a foreign government actively worked to subvert and undermine authentic public agenda-building efforts by engaged publics. Accounts disguised as U.S. citizens infiltrated normal political conversations and inserted false, misleading, or sensationalized information. These practices create an existential threat to the very democratic ideals that grant the electorate confidence in the political process."
"Our findings suggest that this state-sponsored public agenda building attempted to achieve those effects prior to the 2016 U.S. Presidential election in two ways. First, the IRA destabilized authentic political discourse and focused support on one candidate in favor of another and, as their predecessors had done historically, worked to support a politically preferred candidate (Shane & Mazzetti, 2018, pp. 1–11). Second, the IRA worked to delegitimize knowledge. Just as the KGB historically spread conspiracies regarding the Kennedy assassination and the AIDS epidemic, our findings support previous research (Broniatowski et al., 2018) that IRA messaging attempted to undermine scientific consensus, civil institutions, and the trustworthiness of the media. These attacks could have the potential for societal damage well beyond any single political campaign."

Falling Behind: How social media companies are failing to combat inauthentic behaviour online

Sebastian Bay and Rolf Fredheim, NATO Strategic Communications Centre of Excellence (NATO STRATCOM)

November 2019

This report details a successful attempt by researchers to purchase inauthentic social media activity. The experiment was conducted between May and August 2019, and tested the platforms’ various claims that inauthenticity is largely a historical problem which they have now tackled. The authors conclude that the platforms’ claims to have tackled inauthentic activity have been exaggerated and that independent regulation is required. They write:

"To test the ability of Social Media Companies to identify and remove manipulation, we bought engagement on 105 different posts on Facebook, Instagram, Twitter, and YouTube using 11 Russian and 5 European (1 Polish, 2 German, 1 French, 1 Italian) social media manipulation service providers. At a cost of just 300 EUR, we bought 3 530 comments, 25 750 likes, 20 000 views, and 5 100 followers. By studying the accounts that delivered the purchased manipulation, we were able to identify 18 739 accounts used to manipulate social media platforms. In a test of the platforms’ ability to independently detect misuse, we found that four weeks after purchase, 4 in 5 of the bought inauthentic engagements were still online. We further tested the platforms ability to respond to user feedback by reporting a sample of the fake accounts. Three weeks after reporting more than 95% of the reported accounts were still active online Most of the inauthentic accounts we monitored remained active throughout the experiment. This means that malicious activity conducted by other actors using the same services and the same accounts also went unnoticed. While we did identify political manipulation—as many as four out of five accounts used for manipulation on Facebook had been used to engage with political content to some extent—we assess that more than 90% of purchased engagements on social media are used for commercial purposes. Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms. Self-regulation is not working. The manipulation industry is growing year by year. We see no sign that it is becoming substantially more expensive or more difficult to conduct widespread social media manipulation. In contrast with the reports presented by the social media companies themselves, our report presents a different perspective: We were easily able to buy more than 54 000 inauthentic social media interactions with little or no resistance. Although the fight against online disinformation and coordinated inauthentic behaviour is far from over, an important finding of our experiment is that the different platforms aren’t equally bad—in fact, some are significantly better at identifying and removing manipulative accounts and activities than others. Investment, resources, and determination make a difference. Recommendations: -Setting new standards and requiring reporting based on more meaningful criteria -Establishing independent and well-resourced oversight of the social media platforms -Increasing the transparency of the social media platforms -Regulating the market for social media manipulation


#IStandWithDan versus #DictatorDan: the polarised dynamics of Twitter discussions about Victoria’s COVID-19 restrictions

Timothy Graham, Axel Bruns , Daniel Angus, Edward Hurcombe and Sam Hames

Media International Australia 2021, Vol. 179(1) 127–148


This research by Australian academics looks at two interrelated hashtag campaigns targeting the Victorian State Premier, Daniel Andrews of the Australian Labor Party, regarding the Victorian State Government’s handling of the COVID-19 pandemic in mid-to-late 2020. They examine how a small number of hyper-partisan pro- and anti-government campaigners were able to mobilise ad hoc communities on Twitter and influence the broader debate.


The researchers examine 396,983 tweets sent by 40,203 accounts between 1 March 2020 and 25 September 2020 containing the hashtags “#IStandWithDan”, “#DictatorDan” or “#DanLiedPeopleDied”. This included a qualitative content analysis of the top 50 most active accounts (by tweet frequency) for each of the three hashtags, including an attempt to determine which accounts represented real, authentic users and which were “sockpuppet” accounts, which they define as “an account with anonymous and/or clearly fabricated profile details, where the actor(s) controlling the account are not identifiable.”
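To illustrate the first step of this method, here is a minimal sketch (our illustration, not the authors’ code, with hypothetical column names) of how the most active accounts per hashtag could be extracted from a tweet dataset before any manual sockpuppet coding takes place:

```python
# Minimal sketch (not the authors' code): identify the top 50 most active
# accounts for each hashtag, as a starting point for manual qualitative coding.
# Assumes a pandas DataFrame with hypothetical columns 'user' and 'text'.
import pandas as pd

HASHTAGS = ["#IStandWithDan", "#DictatorDan", "#DanLiedPeopleDied"]

def top_accounts(tweets: pd.DataFrame, hashtag: str, n: int = 50) -> pd.Series:
    """Return the n accounts that tweeted the given hashtag most often."""
    mask = tweets["text"].str.contains(hashtag, case=False, regex=False)
    return tweets.loc[mask, "user"].value_counts().head(n)

# Toy usage example; in the study this would run over ~397,000 tweets.
tweets = pd.DataFrame({
    "user": ["a", "a", "b", "c", "c", "c"],
    "text": ["#DictatorDan resign", "#DictatorDan again", "#IStandWithDan",
             "#DictatorDan", "#DanLiedPeopleDied", "#IStandWithDan"],
})
for tag in HASHTAGS:
    print(tag, top_accounts(tweets, tag).to_dict())
```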


The researchers identify significant numbers of “sockpuppets” across the debate, with 54% of the top 50 accounts (by tweet frequency) using the anti-government hashtags identified as “sockpuppets”, and 34% of those using the pro-government hashtag. In the case of the anti-government hashtag campaign, they find “evidence that the broader adoption and dissemination of language targeting Andrews is driven at least in part by coordinated and apparently inauthentic activity that amplifies the visibility of such language before it is adopted by genuine Twitter users.”


Notably, the researchers identify “only a vanishingly small number of likely bots”, which they define as “entirely automated” accounts. In other words, the inauthentic amplification they identify is driven by real human beings exploiting the ease with which anonymous accounts can be created on the platforms.


“A more likely explanation, and one also in keeping with our observations of the greater percentage of fabricated sockpuppet profiles among the most active accounts in the anti-Dan hashtags, is that the fringe activists promoting the #DictatorDan and #DanLiedPeopleDied hashtags have engaged in the deliberate creation of new, ‘fake’ accounts that are designed to generate the impression of greater popular support for their political agenda than actually exists in the Victorian population (or at least in its representation on Twitter), and to use these fabricated accounts to fool Twitter’s trending topic algorithms into giving their hashtags greater visibility on the platform. By contrast, the general absence of such practices means that #IStandWithDan activity is a more authentic expression of Twitter users’ sentiment.”

“Overall, then, the flow patterns we observe with the anti-Dan hashtags should more properly be described as follows:

- An undercurrent of antipathy towards the pandemic lockdown measures circulates on Twitter;

- Mainstream and especially conservative news media cover the actions of the Victorian state government from a critical perspective;

- Some such reporting is used by anti-Andrews activists on Twitter to sharpen their attacks against Andrews (see, for example, the Yemini tweet shown in Figure 4), but in doing so, they also draw on pre-existing memes and rhetoric from other sources (including the Sinophobic #ChinaLiedPeopleDied), and adapt these to the local situation;

- Such rhetoric is circulated by ordinary users and their hyper-partisan opinion leaders on Twitter, amplified by spam-like tweeting behaviours and purpose-created sockpuppet accounts, and aggregated by using anti-Dan hashtags such as #DictatorDan and #DanLiedPeopleDied as a rallying point;

- This content is in turn directed at news media, journalists, and politicians (as Table 1 shows) in the hope that it may find sympathy and endorsement, in the form of retweets on Twitter itself or take-up in their own activities outside of the platform (including MP Tim Smith’s Twitter poll, in Figure 1);

- And such take-up in turn encourages further engagement in anti-Dan hashtags on Twitter, repeatedly also pushing them into the Australian trending topics list”


Influencers, Amplifiers, and Icons: A Systematic Approach to Understanding the Roles of Islamophobic Actors on Twitter

Lawrence Pintak, Brian J. Bowe, and Jonathan Albright

Journalism & Mass Communication Quarterly, July 2021. https://doi.org/10.1177%2F10776990211031567

This study analyses the anti-Muslim/anti-immigrant Twitter discourse surrounding Ilhan Omar, who successfully ran for Congress in the 2018 US midterm elections.


The research examines the clusters of accounts posting tweets that contained Islamophobic or xenophobic language or other forms of hate speech regarding Omar and her candidacy. It identifies three categories of Twitter accounts - “Influencers”, “Amplifiers”, and “Icons” - and explores their respective roles in the propagation of Islamophobic rhetoric.


“Influencer” accounts were defined as those linked to the anti-Omar Islamophobic/hate speech content which were scored highly by the PageRank algorithm, a link analysis algorithm widely used to assess the influence of webpages. “Amplifier” accounts were defined as those which ranked highly when measured by weighted out-degree, i.e. by the sum of their retweets, replies, tags and mentions which linked Islamophobic/hateful content back to Omar. “Icons” were defined as accounts with the most followers, generally high-profile figures, e.g. celebrities, politicians, sports stars, or accounts linked to major news organisations.
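To make these two network metrics concrete, here is a minimal sketch (our illustration, not the authors’ code) of computing PageRank and weighted out-degree with networkx on a toy interaction graph, where an edge from A to B with weight w means account A retweeted, replied to, tagged or mentioned B w times; the account names are invented:

```python
# Minimal sketch: "Influencers" score highly on PageRank over the interaction
# graph; "Amplifiers" score highly on weighted out-degree (the total retweets,
# replies, tags and mentions they send). Node names are made up.
import networkx as nx

G = nx.DiGraph()
# (source, target, number_of_interactions) -- toy data
edges = [
    ("amplifier_1", "influencer_1", 120),
    ("amplifier_2", "influencer_1", 95),
    ("amplifier_1", "influencer_2", 40),
    ("ordinary_user", "influencer_1", 2),
]
G.add_weighted_edges_from(edges)

pagerank = nx.pagerank(G, weight="weight")          # influence of each account
out_degree = dict(G.out_degree(weight="weight"))    # amplification activity

print(sorted(pagerank.items(), key=lambda kv: -kv[1])[:3])
print(sorted(out_degree.items(), key=lambda kv: -kv[1])[:3])
```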


The researchers found that “Influencer” accounts were generally authentic and identifiable. The top influencer accounts helped shape the discourse, producing a large quantity of original material. For example, the account of a professional conservative provocateur, @LauraLoomer, “dominated the Islamophobic Twitter narrative around Omar” and “seeded the narrative with posts that were widely retweeted.”


The most significant “Amplifier” accounts, on the other hand, were found to be mostly inauthentic. Of the top 40 Amplifiers spreading Islamophobic/xenophobic messages linked to Omar’s election campaign network, the researchers determine that only 11 were authentic accounts.


The "Icon" accounts had an impact on the discourse through the size of their follower account, despite a very low number of tweets about Omar. The researchers conclude that they "played virtually no role in the overarching anti-Muslim narrative of these two candidates".


In other words, the Islamophobic/xenophobic discourse was largely driven by a "handful of Influencers— in this case, agents provocateurs— [who] were responsible for authoring, or giving initial impetus to, the majority of the offensive tweets", who were mainly not anonymous. This information was "then relayed to the broader Twitter universe by a larger, but still finite, network of Amplifiers, many of which were either identified as a form of bot or showed signs of the kind of “coordinated inauthentic activity” that characterise bots."


"These inauthentic accounts represent hidden forces, which have a real effect on the discourse, serving as automated megaphones that, in the case of anti-Muslim and xenophobic hate speech, transform the Twitter “dialogue” into a one-way monologue of hate. Together, these shadowy accounts function to poison the political narrative, drawing in both likeminded and unsuspecting individuals who retweeted their posts, disproportionately amplifying—and, for some, normalizing—the message of intolerance"

"Just because a large proportion of the tweets studied here were artificial does not mean they were inconsequential. Rather, they played an important role in distorting online civic discourse, in part when journalists and interested members of the public interacted with this material."

Social Media and Democracy - The state of the field, prospects for reform

Edited by Nathaniel Persily (Stanford University) and Joshua A. Tucker (New York University)

Cambridge University Press, August 2020

PDF of full book available here: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/E79E2BBF03C18C3A56A5CC393698F117/9781108835558AR.pdf/Social_Media_and_Democracy.pdf


This book provides an important overview of the field, and Chapter 2 (Misinformation, Disinformation, and Online Propaganda - Andrew M. Guess and Benjamin A. Lyons) and Chapter 5 (Bots and Computational Propaganda: Automation for Communication and Control - Samuel C. Woolley) are particularly relevant.


The importance of fake and anonymous accounts is a key theme of this work. As Guess and Lyons note in Chapter 2, "those who intend to mislead others also tend to mask their identity". Fake accounts are used both in the supply of misinformation and in its dissemination and spread. Chapter 5 sets out some of the key ways in which bot accounts are used for computational propaganda. These include:

  • "Manufacturing consensus" during election cycles - using inauthentic engagement during elections to give "voters the impression that the campaigns had large scale online grassroots support, to plant ideas in the news cycle, and to effect trends on digital platforms"

  • "State-sponsored trolling" - using anonymous and fake accounts to produce large amounts of content "aimed at attacking political opposition"

  • "Influencing online conversation" in "other topical areas" - for example promoting anti-vaccine arguments and distributing "misinformation and rumors during natural disasters and terrorist attacks"


The chapter concludes by highlighting the limitations of reactive and software-focused approaches to tackling bots. It argues that societal and regulatory interventions are also required:

“Software solutions, no matter how sophisticated the technology, can only mitigate a portion of the problems intrinsic to computational propaganda. Social solutions must be implemented as well.”


How online misinformation works: a costly signalling perspective

Neri Marsili

in Misinformation and Other Epistemic Pathologies

Mihaela Popa-Wyatt (ed)

Cambridge University Press (2025) https://doi.org/10.48550/arXiv.2506.17158


This chapter from a forthcoming book applies "costly signalling theory" (CST), a framework taken from evolutionary biology, to online misinformation, showing how reputational incentives (or the lack thereof) affect the sharing of true versus false content. Marsili identifies the ways in which specific features and functionalities of social media platforms (which the chapter terms "communicative affordances"), such as anonymity and pseudonymity, alter the framework of costs and incentives that would typically discourage individuals from deceitful communication.


The study identifies several such communicative affordances which are common features of online spaces: reposting (which offers plausible deniability and low cost), gamified exchanges (e.g., like/retweet counts incentivising popularity over truth), information overload (impairing user vigilance and enforcement of norms), and the opacity of sources (due to anonymity, bots, or pseudonyms). These attributes together lower the “cost” of spreading misinformation while diminishing the reputational risks associated with being caught lying.
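One way to make this costly-signalling logic explicit (a schematic formalisation of our own, not Marsili's notation) is as an expected-utility condition: a user shares a piece of content when the anticipated benefits outweigh the anticipated costs,

$$
B_{\text{social}} + B_{\text{attention}} \;>\; C_{\text{effort}} + p_{\text{detection}} \cdot C_{\text{reputation}}.
$$

In these terms, frictionless reposting lowers the effort cost, gamified metrics inflate the attention benefit, information overload lowers the probability of being caught, and anonymity or pseudonymity drives the reputational cost towards zero, so the inequality is satisfied for false content far more often than it would be in offline, identified communication.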


This analytical approach is helpful in understanding the potential for platform design interventions to reduce the incentives to share misinformation by raising its cost in terms of effort or reputational risk. For example, X's introduction of extra friction before reposting, in the form of a prompt to read the article first, can be understood as increasing the effort (cost) of reposting. The chapter highlights Wikipedia as offering users a very different set of communicative affordances, where a community of vigilant users and strong social incentives help maintain reliability.


Anonymous and pseudonymous accounts are highlighted as a major factor in reshaping the reputational mechanisms which, according to costly signalling theory, help sustain honest communication in other contexts. These reputational mechanisms rely on recipients of information being able to keep track of the record of sources, and on sources having a strong interest in maintaining a good reputation. On social media, however, users are confronted with a plethora of "opaque sources", and these reputational incentives break down:


Anonymous sources are, by definition, sources whose identity we cannot easily identify; their reliability cannot be inferred from their reputational track record. Second, anonymous communicators don’t have to worry about staining their reputation: they don’t pay the usual costs associated with sharing false information (like compromising their credibility, or losing social standing). Therefore, reputational costs can hardly motivate anonymous speakers to be truthful.

Use of anonymous and pseudonymous communication to avoid the reputational costs of spreading falsehood long predates the internet. However, Marsili argues digital technologies such as social media "supercharge our ability to do so":

Online communication differs in the ease with which anonymity can be achieved, and in how common it is. Since anonymous speakers don’t have to worry about reputational costs, their increased prevalence, in turn, facilitates the production and circulation of misinformation online

Many social media accounts are more accurately characterised as "pseudonymous" rather than truly anonymous. Marsili acknowledges that compared to total anonymity "pseudonymity doesn’t fully evade the costs associated with communication, nor does it leave an information-seeker completely clueless about the track record of the communicator". However, pseudonyms still "allow users to insulate reputational costs within a specific conversational domain", avoiding any risk to real-world reputation. Additionally, and crucially:

pseudonymity makes it easy to misrepresent one’s identity, by pretending to be someone or by claiming properties one doesn’t possess. This hinders effective source monitoring and facilitates misinformation spread, as many ordinary ways of assessing reliability and honesty (e.g. information about education, gender, personal motivations, etc.) can easily be fabricated online.

This framework for understanding anonymity and pseudonymity as "communicative affordances" which stem from the design of social media platforms is helpful when considering design interventions to mitigate the potentially harmful impacts of these affordances. Wikipedia is highlighted as successfully mitigating many of the risks to reliability associated with anonymous and pseudonymous contributions through "strong community oversight and incentives for reliable contributions".


Marsili's framework helps explain how "user verification can help users discern between pseudonymous and real sources, improving collective vigilance", but also why the design of the user verification system is crucial to ensure it provides the right incentives. Elon Musk's monetisation of blue ticks on X (critiqued elsewhere on this blog) is offered as an example of how not to do it:

When Elon Musk began selling verification on X, users impersonating companies and politicians began proliferating, spreading misinformation and causing economic disruption. This illustrates the importance of design interventions (such as user verification) that facilitate, rather than disrupt, the userbase’s ability to assess source reliability in spite of the widespread presence of opaque sources.



Communication Rights for Social Bots?: Options for the Governance of Automated Computer-Generated Online Identities

Stefano Pedrazzi; Franziska Oehmer

Journal of Information Policy (2020) 10: 549–581.


This article examines the democratic risks posed by social bots—automated accounts that imitate human users on social media—and explores governance options to address them while respecting constitutional protections for freedom of expression. The authors define social bots as software agents that produce and distribute content, interact with humans and other bots, and often operate under the false pretense of human identity. They review evidence of their use to spread disinformation, distort popularity metrics, and influence opinion‑formation in contexts including elections, referendums, and polarised debates on issues such as migration or vaccination.


Pedrazzi and Oehmer identify three specific, interlocking problem areas associated with social bots:

  1. The dissemination of illegal content and disinformation

  2. Under the false pretense of human identity

  3. By means of a potentially unlimited number of communication and networking activities


The paper stresses that responsibility for the harmful impacts is diffuse—what they call the “problem of many hands”. Responsible actors include "legislators who consciously or unconsciously fail to adopt standards against malicious bot activities, operators of social networks (including their management, programmers, etc.) whose network infrastructure and terms of use allow or even promote malignant bot activities, and social network users who intentionally or negligently disseminate or endorse such activities."


The paper notes the challenges around the unambiguous definition and identification of bots, including the false pretense and imitation of human behaviour, but also a lack of consensus over what degree of automation should be considered to make an account a bot. It offers the following definition:

Social bots are computer algorithms that automatically produce and distribute content in social networks or online forums and interact with human users, as well as other bots, thereby imitating human identity or behaviour, in order to (possibly) influence opinions or behaviour. Social bots operate on behalf of individual or collective actors, for example, for political or commercial motives. They are often associated and investigated in connection with the deception of users, the spread of rumors, defamations, or disinformation, but they can also be employed for nonmalicious purposes when, for example, they automatically aggregate and disseminate content from different sources.

Helpfully, the paper locates the activities and impact of bots "in interaction with characteristics of the network architecture and the algorithmic selection logic of platforms". It identifies three key ways in which they have an effect:


1) Automation - which enables a scale of activity "restricted only by physical limits to the transmission and computational processing of signals, and by the design and architecture of an online environment"


2) Distortion of popularity indicators - the ability to feign human identity, combined with the scope for unlimited communication and interaction afforded by automation, can "distort popularity indicators such as number of followers, likes or retweets of people, topics, or positions". This has an impact on human users, with social bots able to "influence the extent of cognitive engagement with and the persuasiveness of content" and to create false impressions of consensus and "spirals of silence" around (purported) minority views. It also has an impact on "the algorithmic selection logic of social networks", with automated interaction from bots making content "more likely to be recommended and displayed to users, which again can affect their reception and behavior" (see the toy sketch after point 3 below).


3) Reach extension - as well as distorting how accounts, perspectives, and content are perceived, social bots are able to increase the reach of such accounts, perspectives and content. "By deceiving and targeting human users and supported by recommendation algorithms, they can achieve a faster and more effective diffusion of content."
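As a toy illustration of the "distortion of popularity indicators" mechanism in point 2 above (our sketch; the scoring formula is invented, not any platform's actual algorithm): if posts are ranked by a simple engagement count, a modest number of automated likes is enough to push a post ahead of organically more popular content, which then earns it further genuine exposure.

```python
# Toy illustration: an engagement-based ranking and the effect of bot likes.
# The scoring formula is invented for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    name: str
    human_likes: int
    bot_likes: int = 0

    @property
    def score(self) -> int:
        # The ranking cannot tell human engagement from bot engagement.
        return self.human_likes + self.bot_likes

posts = [
    Post("organic post", human_likes=400),
    Post("bot-promoted post", human_likes=60, bot_likes=800),
]

for p in sorted(posts, key=lambda p: p.score, reverse=True):
    print(f"{p.name}: score={p.score} (humans={p.human_likes}, bots={p.bot_likes})")
# The bot-promoted post ranks first despite far less genuine engagement.
```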


Pedrazzi and Oehmer argue for a range of governance interventions to tackle the malign influence of social bots whilst preserving the benefits of automation and minimising risks to democratic discourse. They highlight the potential of what they term "bot disclosure laws", such as the draft U.S. Bot Disclosure and Accountability Act of 2019 or the Medienstaatsvertrag der Länder in Germany.


They explore in some detail the potential of user verification as a means of distinguishing bot accounts.


The verification process would have to be designed to protect the anonymity of users (even in authoritarian regimes), especially with regard to possible repression, but also to ensure that no discrimination in access, for example, due to missing documents, is implemented. A possible solution for this might be a blockchain-based self-sovereign identity approach, which leaves the sovereignty over one's own data to the user and allows him or her to decide which service provider gets access to what data.
For example, a social media platform would simply receive information from a certified digital identity (e-ID) that an applicant is a natural or legal person and whether a possible quota of permitted profiles has already been exhausted. In addition, further elements such as name or picture could be verified at the request of a user, which could be of particular importance for persons in public life. In addition to simply indicating that a profile is verified, this information could further be used to display the ratio between verified and nonverified followers or friends in the case of profiles or, in the case of content, the relationship between verified and nonverified profiles that retweet or like it. This would at the same time increase transparency with regard to popularity indicators. However, it can be assumed that the acceptance and effect of such indications would be based on the perception of their validity, which in turn would have consequences, for example, with regard to the selection and credibility attribution of sources and content, similar to those discussed for warnings in connection with false content.
Authentication of human users can also be done technologically using challenge–response techniques such as captcha tests, although progress in machine learning and AI has resulted in such tests, as currently designed, being increasingly mastered by software. Such authentication could be limited, for example, to profiles that prefer to remain anonymous or unverified or where the determined value for the automation probability exceeds a certain value. This would prevent users from having to face such tests permanently. In addition, verified profiles could be assigned more relevance for recommendations, rankings or the identification of emerging issues by means of algorithmic procedures.
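As a minimal sketch of one of the transparency signals proposed in this passage, the ratio of verified to nonverified followers (the data structure here is hypothetical; a real platform would compute this from its own account records):

```python
# Minimal sketch: computing the verified/nonverified follower ratio suggested
# above. The Follower structure is hypothetical.
from typing import List, NamedTuple

class Follower(NamedTuple):
    handle: str
    verified: bool   # e.g. confirmed via an e-ID attestation

def verified_ratio(followers: List[Follower]) -> float:
    """Share of a profile's followers that are verified accounts."""
    if not followers:
        return 0.0
    return sum(f.verified for f in followers) / len(followers)

followers = [Follower("alice", True), Follower("bot_4821", False),
             Follower("bob", True), Follower("bot_9313", False),
             Follower("carol", True)]
print(f"Verified followers: {verified_ratio(followers):.0%}")  # 60%
```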
