Clean up the Internet’s thinking about online anonymity, and about the role its misuse plays in undermining online discourse, is informed by a wide range of academic research. Below is a non-exhaustive list of academic works exploring the relationship between anonymity, online disinhibition, and online harms such as trolling, abuse, and bullying. We summarise each piece and include some short quotes.
We first published this list in October 2019, and it was last updated in January 2022. We’d hugely welcome other relevant research being brought to our attention.
Please see also our companion piece covering research relating to anonymity, inauthenticity, and misinformation/disinformation.
Where a full version of the article is available for free online, we include a direct link. Where the article is paywalled, we include the Digital Object Identifier (DOI).
“The Online Disinhibition Effect”
John Suler
CyberPsychology & Behavior 7, no. 3 (2004): 321-326
This oft-cited article is an influential attempt to theorise “Online Disinhibition”, or the phenomenon of people feeling able to behave and interact differently from behind their keyboards.
Suler draws an important distinction between “benign online disinhibition” and “toxic online disinhibition”. Benign forms of disinhibition enable an individual to explore and share thoughts, emotions, and ideas in ways which have a positive or even therapeutic effect. An example might be an individual feeling able to explore an aspect of sexual identity free from the judgement of family members. Toxic forms of disinhibition involve an individual feeling able to misbehave, and able to avoid responsibility for their misbehaviour. Examples would be trolling, bullying and abuse.
Suler identifies anonymity as “one of the principle factors” behind online disinhibition.
"This anonymity is one of the principle factors that creates the disinhibition effect. When people have the opportunity to separate their actions online from their in-person lifestyle and identity, they feel less vulnerable about self-disclosing and acting out. Whatever they say or do can’t be directly linked to the rest of their lives. In a process of dissociation, they don’t have to own their behavior by acknowledging it within the full context of an integrated online/offline identity. The online self becomes a compartmentalized self. In the case of expressed hostilities or other deviant actions, the person can avert responsibility for those behaviors, almost as if superego restrictions and moral cognitive processes have been temporarily suspended from the online psyche. In fact, people might even convince themselves that those online behaviors 'aren’t me at all.'”
“Effects of Anonymity, Invisibility, and Lack of Eye-contact on Toxic Online Disinhibition”
Noam Lapidot-Lefler and Azy Barak
Computers in Human Behavior 28, no. 2 (2012): 434-443
This study, conducted in Israel, explored the role of anonymity in “flaming” behaviour over instant messaging. It sought to break online anonymity down into constituent parts - non-disclosure of personal details, invisibility, and absence of eye-contact - and explore the relationship between them. Paired conversations were set up over MSN Messenger with different variables, e.g. cameras positioned in different places, and personal details shared or concealed.
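To make the design concrete, here is a minimal sketch of how three binary factors like these combine into experimental conditions. This is our own illustration of a 2×2×2 factorial design, not the authors’ materials; the factor names are paraphrases of the variables described above.

```python
# Our illustration of a 2x2x2 factorial design: three binary factors
# crossed to yield eight experimental conditions. Factor names are
# paraphrases, not the authors' own labels.
from itertools import product

factors = {
    "personal_details_disclosed": (True, False),  # identifiability
    "visible_on_webcam": (True, False),           # (in)visibility
    "eye_contact": (True, False),                 # camera at eye level or not
}

# Enumerate all eight combinations; each paired conversation would be
# assigned to one condition, letting the contribution of each factor to
# "flaming" be estimated separately.
for i, combo in enumerate(product(*factors.values()), start=1):
    print(f"condition {i}: {dict(zip(factors, combo))}")
```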
The study found that the single biggest factor increasing the likelihood of flaming was a lack of eye contact. The authors also found that not being identifiable from personal details worsened the effects of the lack of eye contact. They conclude that the experience of “anonymity online” which leads to toxic disinhibition is made up of several factors, which they suggest calling an “online sense of unidentifiability”.
“To reconcile the prevailing definition of the general and quite obscure term of ‘anonymity’ as used in the context of virtual reality, a more comprehensive definition is apparently required. The present findings suggest that one can think of anonymity as an assemblage of different levels of online unidentifiability, in which non-disclosure of personal details, invisibility, and absence of eye-contact compose the most significant assemblage; these components appear to combine in different degrees, thus yielding a variety of ‘anonymities.’ The new concept we refer to—online sense of unidentifiability—can be understood as spanning a range, in which three major factors are considered: one end of this range is characterized by a lack of personal information (i.e., anonymity), lack of visibility, and lack of eye-contact; the other end, by disclosure of personal data, visibility, and eye-contact.
"The current findings suggest that previous definitions of anonymity did not take into account all the factors that characterize the online communication environment, specifically invisibility and absence of eye-contact. Thus, it seems advisable that future studies define the online social setting carefully and precisely so that the effects of anonymity on the behavior of communicants in cyberspace can be evaluated alongside the effects of other online situational variables. First and foremost, it is advisable that the presence of eye-contact (or its absence) between communicants be assessed in future studies of online disinhibition. It also appears that the term anonymity, as we know it, has not yet been adapted to the parameters of the new virtual reality. Henceforth, studies that include the anonymity variable should consider the broader definition the online sense of unidentifiability: non-disclosure of personal details, invisibility, and absence of eye-contact.”
“Virtuous or Vitriolic: The effect of anonymity on civility in online newspaper reader comment boards”
Arthur D. Santana
Journalism Practice 8, no. 1 (2014): 18-33
This study compares the levels of incivility in online comments about articles relating to immigration across a range of newspaper websites. It investigates whether there is a difference in levels of incivility where newspapers allow anonymous users to post comments, compared to those using a Facebook plugin to require users to share their Facebook identity.
The study found a significant difference in levels of incivility, including expressions of racism, between sites where total anonymity was permitted and those requiring Facebook verification.
“Anonymous commenters were significantly more likely to register their opinion with an uncivil comment than non-anonymous commenters. Just over 53% of the anonymous comments were uncivil, while 28.7% of the non-anonymous comments were uncivil.”
The study noted that the Facebook plugin was not the most rigorous form of identity verification – at the time, around 8% of Facebook accounts were thought to be inauthentic or using a false name. The study also found that the restrictions on anonymity afforded by the Facebook plugin came nowhere close to eliminating incivility altogether.
“While most of the comments in the non-anonymous forums were civil, meaning that removing anonymity was a successful strategy for cutting down on the level of uncivil comment, it by no means eliminated incivility altogether.”
The study concludes that restrictions on anonymity could play a significant role in tackling incivility in online newspaper forums.
“The ways people express themselves online is significantly dependent on whether their true identity is intact, suggesting not just a correlation between anonymity and incivility but also causation. As such, commenting forums of newspaper which disallow anonymity show more civility than those that allow it. These findings should be of interest to those newspapers that allow anonymity and that have expressed frustration with rampant incivility and ad hominem attacks in their commenting forums”
Digital Social Norm Enforcement: Online Firestorms in Social Media
Katja Rost, Lea Stahel, Bruno S. Frey
PLOS ONE, June 17, 2016
This study looks at comments left on petitions on a German petition platform, openpetition.de, which offers a service similar to change.org in the UK or USA.
It introduces social norm theory in an attempt to understand online aggression in a social-political online setting, "challenging the popular assumption that online anonymity is one of the principle factors that promotes aggression".
Their analysis of over half a million comments left on 1,612 petitions over three years suggests that non-anonymous individuals are more aggressive than anonymous individuals.
They argue that this is because a major motivation of uncivil commenters is the "sanctioning of (perceived) norm violations". Such commenters see themselves as "norm-enforcers": they use strong language deliberately and intentionally, and wish to attach their names to it to add weight to the sanction.
"According to social norm theory, in social media, individuals mostly use aggressive word-of-mouth propagation to criticize the behavior of public actors. As people enforce social norms and promote public goods, it is most likely that they perceive the behavior of the accused public actors as driven by lower-order moral ideals and principles while that they perceive their own behavior as driven by higher-order moral ideals and principles. From this point of view there is no need to hide their identity."
The authors note some of the ways in which an online petition platform may not be typical of other social media interactions. In particular, they note that it is a specifically "social-political online setting", that the commenters are "intrinsically motivated", that the petitions are “protests” and the signers could equally be described as "protesters".
It's worth noting that this study makes no distinction between uncivil comments made about a third-party petition “target” (e.g. a politician who is perceived to be corrupt), who is not present in the conversation and presumably not expected to read it, and uncivil comments directed at other participants, who may well read them and be affected by them. Indeed, the comments cited in this study as examples of incivility and aggression are criticisms *about* politicians of whom the commenter disapproves, but are not directed *at* them:
"Exemplarily, we present three of the most aggressive comments by non-anonymous commenters: “Silly, fake, inhuman and degrading, racist, defamatory and ugly theses like those of Sarrazin (author's note: a former German politician) have no place in this world, let alone in the SPD (author's note: Social democratic party). Sarrazin certainly has no business in the Social democratic party and should try his luck with the Nazis” (ID352216); “HC Strache (author's note: Austrian politician) has an evil, inhuman character, he lies and tries to persuade other people of wrong ideas.” (ID284846); “These authorities are mostly no people, but §§§- and regulatory machines! I detest authorities–with my 67 years’ life experience after all!” (ID418089)."
Perpetuating online sexism offline: Anonymity, interactivity, and the effects of sexist hashtags on social media
Jesse Fox, Carlos Cruz, and Ji Young Lee
Computers in Human Behavior 52 (2015): 436-442
This experiment investigated whether users’ anonymity and their level of interactivity with sexist content on social media influenced sexist attitudes and offline behaviour. Participants were given either an anonymous Twitter account or one carrying personally identifying details. They were told they were taking part in an experiment about online humour and were asked to share or write a number of posts incorporating a sexist hashtag. Anonymous participants were found to be more willing to share and compose sexist material.
After exposure, participants completed two purportedly unrelated tasks, to test whether their online behaviour had any lasting impact on their attitudes: a survey, and a job-hiring simulation in which they evaluated male and female candidates’ résumés. Anonymous participants reported greater hostile sexism after tweeting than non-anonymous participants. Participants who composed sexist tweets reported greater hostile sexism and ranked female job candidates as less competent than those who merely retweeted, although this did not significantly affect their likelihood to hire.
“Given the recent attention to the exclusion of women and sexual harassment online, it is interesting to note that our findings demonstrate that anonymity promotes higher levels of sexism. In a misogynistic online climate, anonymity may be the disinhibiting factor that leads to harassment and even the extreme death threats directed at prominent women in tech such as Anita Sarkeesian, Brianna Wu, and Zoe Quinn. Further our findings indicate that what happens online does not stay online. Anonymous online sexism is harmful not just in the moment of engagement, but also leads to more sexist attitudes afterwards than when the user is identifiable”
Civility and trust in social media
Angelo Antoci, Laura Bonelli, Fabio Paglieri, Tommaso Reggiani, Fabio Sabatini
Journal of Economic Behavior & Organization
Volume 160, April 2019, Pages 83-99
The study is also summarised helpfully by two of the authors in a much shorter blog post.
This experiment investigated the impact that participants’ experiences of civility and incivility, in an online discussion on a controversial topic, had upon their subsequent behaviour and levels of trust. One group were given four genuine threads of uncivil discussion to engage with. Another group read similar threads in which uncivil discussions had been replaced with polite interactions on the same topics. A third control group was exposed to content on the same themes, but in the form of short news excerpts and without social interaction. Participants were then required to play a trust game to measure their levels of trust.
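For readers unfamiliar with the trust game, the sketch below shows the basic mechanics of a standard one-shot version; the endowment, multiplier, and return fraction are illustrative assumptions of ours, not the parameters reported in the paper.

```python
# A minimal sketch of a standard one-shot trust game (illustrative values,
# not the study's parameters). The amount the sender transfers is the
# behavioural measure of trust.

def trust_game_payoffs(endowment, sent, multiplier=3, returned_fraction=0.5):
    """Return (sender_payoff, receiver_payoff) for one round."""
    assert 0 <= sent <= endowment
    received = sent * multiplier          # the transfer is multiplied en route
    returned = received * returned_fraction
    sender_payoff = endowment - sent + returned
    receiver_payoff = received - returned
    return sender_payoff, receiver_payoff

# Sending more signals higher trust, but leaves the sender exposed if the
# receiver fails to reciprocate.
print(trust_game_payoffs(endowment=10, sent=6))   # -> (13.0, 9.0)
print(trust_game_payoffs(endowment=10, sent=1))   # -> (10.5, 1.5)
```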
The experiment found that levels of trust were similarly low amongst those exposed to the uncivil content and the control group. Those who had been exposed to the polite content, on the other hand, displayed markedly higher levels of trust. The experimenters suggest (somewhat depressingly) that this means incivility and distrust are the norm when it comes to social media content, but (more encouragingly) that relatively little initial exposure to more civil content can have a significant impact on users’ subsequent behaviour and attitudes.
From their summary blog post:
"The striking result of our study is that even minimal exposure to the opposite trend, i.e. civil online interaction, has a significant effect on social trust. This suggests that what is at stake in moderating online discussion is not simply the prevention of negative phenomena (hate speech, cyberbullying, digital harassment, etc.), but also the achievement of significant social benefits, most notably a measurable increase in trust and social capital that can, in turn, positively affect economic development.
"The take-home message for policy makers is rather straightforward: instead of focusing exclusively on fighting against noxious online behaviour, we should also create the preconditions to promote civil discussion on online platforms. While the former goal needs to be pursued via strict regulations, the latter is best fostered by carefully designing (or tweaking) the platforms themselves.
"In doing so, freedom of expression needs to be preserved for all users, which is why we are sceptical that legal prohibitions alone can act as a panacea against online incivility. Instead, we should strive to effect a paradigm shift in what kind of interactions social media afford to their users: today uncivil and shallow confrontation is the norm, with civil debate relegated to being the exception; tomorrow it should be the other way around, thanks to social networking platforms designed to encourage reflective discourse, unbiased assessment of evidence, and open-minded belief change – while still leaving people free to verbally attack each other, should they feel so inclined."
“I did it for the Lulz”: How the dark personality predicts online disinhibition and aggressive online behavior in adolescence
Anna Kurek, Paul E. Jose, Jaimee Stuart
Computers in Human Behavior 98 (2019): 31-40
This study of adolescents in New Zealand investigates the relationship between pre-existing personality traits, online disinhibition, and what it terms “cyber aggression”.
It finds that teenagers with “dark personality traits” (sadism, psychopathy, narcissism) are particularly susceptible to the effects of online disinhibition. Online disinhibition makes them more likely to manifest these negative personality traits.
"These results suggest, as Suler (2004) noted, that the underlying mechanisms of inhibited or disinhibited behaviour lie fundamentally within the processes of personality dynamics, and consistent with this view, the present findings provide empirical evidence that certain individuals may be at an increased risk of disinhibited behaviour online.
"Some youth, whose basic needs are grounded in dark motives, when exposed to a vastly uncontrolled and unmonitored space like the Internet, may gravitate towards the unhealthy development of disinhibited actions and attitudes
"Several significant results were found, namely that all three dark personality traits, as well as adolescent false self perceptions, were significantly and positively associated with increased online disinhibition. In addition, while only sadistic traits and online disinhibition were found to be significant direct predictors of cyber aggression, several indirect effects were also discovered, namely that all three dark traits became predictive of cyber aggression through the indirect role of increased disinhibition. Additionally, both narcissistic and psychopathic tendencies indirectly predicted cyber aggression through the mediation of both false self perceptions and online disinhibition."
Anonymity, Membership-Length and Postage Frequency as Predictors of Extremist Language and Behaviour among Twitter Users
Hollie Sutch & Pelham Carter
International Journal of Cyber Criminology, Vol 13 (December 2019)
This study seeks “to ascertain whether anonymity, membership length and postage frequency are predictors of online extremism”. It conducts two forms of analysis on a sample of over 100,000 tweets: a corpus linguistic analysis, which looks for extreme language associated with Islam and Islamophobia, and a content analysis, looking for four types of "extreme behaviour" ("extreme pro-social", "extreme anti-social", "extreme anti-social prejudicial biases" and "extreme radical behaviours").
The researchers find that a high level of anonymity was a strong predictor of both extreme language and extreme negative behaviours. They also found that high levels of identifiability correlated with higher levels of extreme pro-social behaviour.
“As hypothesised, the present research found that Twitter accounts who have high anonymity, where the number of identifiable items is low, are significantly more associated with extreme words - compared to other levels of anonymity. These results coincide with wider literature which elucidates how levels of anonymity can influence how an individual will behave online. In particular it provides evidence to support theoretical claims which suggest how a user’s level of anonymity can influence them to propagate extreme narratives online.”
“This research, like hypothesised, demonstrated that a lower number of identifiable items significantly predicted higher levels of extreme anti-social behaviour, extreme antisocial prejudicial biases and extreme radical behaviour. Wider literature concurs with these findings stating how anonymity increases an individual’s polarisation towards extreme positions”
“As hypothesised for the extreme pro-social predictor variable a higher number of identifiable items significantly predicted higher levels of extreme pro-social behaviour. These findings coincide with literature stating that personalisation online helps to facilitate pro-social behaviour”
The researchers scored the anonymity of a Twitter account according to the number of "identifiable items" associated with it - such as a full name, a potentially identifiable profile picture, a specific location, and any additional links to social media profiles or personal information (like a date of birth). It’s worth noting that the researchers were not able to assess the authenticity of this information, so an inauthentic account presenting false identifiable attributes could in principle have received a low score for anonymity.
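As a rough sketch of this scoring approach (the item names and level thresholds below are our hypothetical illustration, not the authors’ coding scheme), counting claimed identifiable items might look like this:

```python
# Hypothetical illustration of scoring anonymity by counting identifiable
# items on a profile; item names and thresholds are ours, not the authors'.
IDENTIFIABLE_ITEMS = ("full_name", "identifiable_photo", "specific_location",
                      "linked_profiles", "date_of_birth")

def anonymity_level(profile):
    """Fewer identifiable items => higher anonymity. Note the limitation
    flagged above: items are counted as *claimed*, so a convincing fake
    name or photo would still lower the anonymity score."""
    count = sum(1 for item in IDENTIFIABLE_ITEMS if profile.get(item))
    if count <= 1:
        return "high anonymity"
    if count <= 3:
        return "medium anonymity"
    return "low anonymity"

print(anonymity_level({"full_name": True}))  # -> high anonymity
print(anonymity_level({"full_name": True, "identifiable_photo": True,
                       "specific_location": True,
                       "date_of_birth": True}))  # -> low anonymity
```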
The research also found that low membership length (defined as between two and 23 months active) was a predictor of extremist language and of one of the four types of extreme behaviour (extreme anti-social behaviour), and that low postage frequency (defined as an average of between ten and 209 tweets per month) was a predictor of extremist language but not of any type of extreme behaviour.
The prevalence and impact of online trolling of UK members of parliament
S Akhtar, CM Morrison
Computers in Human Behavior, 2019
This study analyses the results of a survey, conducted between February and April 2018 and completed by 181 UK MPs, about their experiences of being trolled and its impact. All MPs who responded had experienced trolling, many of them multiple times a day, and they reported that the principal platforms for this abuse were Twitter and Facebook.
The researchers find that whilst all responding MPs experienced trolling, patterns of trolling varied between male and female MPs. Male MPs reported receiving more abuse, but most of it focused on criticism of their political positions and professional role. Female MPs reported receiving lower amounts of abuse, but “a greater variety of forms of abuse, with the majority of abuse being personal in nature, e.g., sexual abuse”. Unsurprisingly, given the greater amount of personal abuse and threats which female MPs report receiving, they also reported greater emotional impact, including fear for their personal safety.
75% of MPs reported that the amount of abuse they received had increased over the preceding two years. 43.7% of the reported abuse was described as coming from anonymous accounts, and 52.7% from named individuals. MPs of all genders reported that where the perpetrator was known, the abuse was sent by male perpetrators in the overwhelming majority of instances (93.7%).
“Our study revealed that female MPs suffered more emotional stress and damage to their reputation. Thus to conclude on gender differences, our study is the first to show different patterns of trolling in males and females MPs. Male MPs reported more concern about reputational damage, and females more concern about their personal safety. Moreover, the impact of this trolling seemed to have a greater effect on females MPs compared to male MPs”
“Our research highlights that a significant number of MPs (mostly female) are left feeling emotionally and psychologically concerned as a result of social media trolling. This issue needs to be addressed, with more help for MPs, and others with a public profile, in order to cope with inevitable online social media trolling.”
Influencers, Amplifiers, and Icons: A Systematic Approach to Understanding the Roles of Islamophobic Actors on Twitter
Lawrence Pintak, Brian J. Bowe, and Jonathan Albright
Journalism & Mass Communication Quarterly, July 2021
This study analyses the anti-Muslim/anti-immigrant Twitter discourse surrounding Ilhan Omar, who successfully ran for Congress in the 2018 US midterm elections.
The research examines the clusters of accounts posting tweets that contained Islamophobic or xenophobic language or other forms of hate speech regarding Omar and her candidacy. It identifies three categories of Twitter accounts - “Influencers”, “Amplifiers”, and “Icons” - involved in the propagation of Islamophobic rhetoric, and explores their respective roles.
“Influencer” accounts were defined as those linked to the anti-Omar Islamophobic/hate speech content which were scored highly by the PageRank algorithm, a link-analysis algorithm widely used to assess the influence of webpages. “Amplifier” accounts were defined as those which ranked highly when measured by weighted out-degree, i.e. by the sum of their retweets, replies, tags and mentions linking Islamophobic/hateful content back to Omar. “Icons” were defined as the accounts with the most followers, generally high-profile figures, e.g. celebrities, politicians, sports stars, or accounts linked to major news organisations.
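To illustrate the two network measures, here is a toy sketch of ours using the networkx library on a made-up interaction graph; it is not the authors’ actual pipeline, and the account names are invented.

```python
# Toy illustration (not the authors' pipeline) of the two measures described
# above, using the networkx library on a made-up interaction graph.
import networkx as nx

# Directed, weighted graph: an edge A -> B with weight w means account A
# retweeted / replied to / mentioned B a total of w times.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("amplifier1", "influencer", 40),
    ("amplifier2", "influencer", 35),
    ("amplifier1", "icon", 2),
    ("influencer", "icon", 1),
])

# "Influencers": accounts scored highly by PageRank, a link-analysis
# centrality measure.
pagerank = nx.pagerank(G, weight="weight")

# "Amplifiers": accounts with a high weighted out-degree, i.e. the sum of
# their outgoing retweets/replies/mentions.
out_degree = dict(G.out_degree(weight="weight"))

print(max(pagerank, key=pagerank.get))      # the most "influential" node
print(max(out_degree, key=out_degree.get))  # "amplifier1" (40 + 2 = 42)
```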
The researchers found that “Influencer” accounts were generally authentic and identifiable. The top influencer accounts helped shape the discourse, producing a large quantity of original material. For example, the account of a professional conservative provocateur, @LauraLoomer, “dominated the Islamophobic Twitter narrative around Omar” and “seeded the narrative with posts that were widely retweeted.”
The most significant “Amplifier” accounts, on the other hand, were found to be mostly inauthentic. Of the top 40 Amplifiers spreading Islamophobic/xenophobic messages linked to Omar’s election campaign network, the researchers determined that only 11 were authentic accounts.
The "Icon" accounts had an impact on the discourse through the size of their follower account, despite a very low number of tweets about Omar. The researchers conclude that they "played virtually no role in the overarching anti-Muslim narrative of these two candidates".
In other words, the Islamophobic/xenophobic discourse was largely driven by a "handful of Influencers— in this case, agents provocateurs— [who] were responsible for authoring, or giving initial impetus to, the majority of the offensive tweets", who were mainly not anonymous. This information was "then relayed to the broader Twitter universe by a larger, but still finite, network of Amplifiers, many of which were either identified as a form of bot or showed signs of the kind of “coordinated inauthentic activity” that characterise bots."
"These inauthentic accounts represent hidden forces, which have a real effect on the discourse, serving as automated megaphones that, in the case of anti-Muslim and xenophobic hate speech, transform the Twitter “dialogue” into a one-way monologue of hate. Together, these shadowy accounts function to poison the political narrative, drawing in both likeminded and unsuspecting individuals who retweeted their posts, disproportionately amplifying—and, for some, normalizing—the message of intolerance."
"Just because a large proportion of the tweets studied here were artificial does not mean they were inconsequential. Rather, they played an important role in distorting online civic discourse, in part when journalists and interested members of the public interacted with this material."