David Babbs

Academic research about anonymity, inauthenticity, and misinformation

Clean Up The Internet’s thinking about online anonymity, and the role its misuse plays in undermining online discourse, is informed by a wide range of academic research. Below is a non-exhaustive list of academic works exploring the relationship between anonymity, inauthentic accounts, the lack of effective verification, and misinformation and disinformation. We summarise each piece and include some short quotes.

Where a full version of the article is available for free online, we include a direct link. Where the article is paywalled, we include the Digital Object Identifier (DOI).


We’d hugely welcome other relevant research being brought to our attention.


Please see also our companion piece covering research relating to anonymity, online disinhibition, abuse, incivility and trolling.



Cloaked Facebook pages: Exploring fake Islamist propaganda in social media

Johan Farkas, Jannick Schou, Christina Neumayer

New Media & Society, Volume 20, Number 5 (2018), pp. 1850-1867

https://doi.org/10.1177%2F1461444817707759

This research analyses “cloaked” Facebook pages, created to spread political propaganda by imitating the identity of a political opponent in order to spark hateful and aggressive reactions. It looks at Danish Facebook pages disguised as radical Islamist pages, which provoked racist and anti-Muslim reactions as well as negative sentiments towards refugees and immigrants in Denmark in general.

The “cloaked pages” were inauthentic pages purporting to be from radical Islamists, but actually authored by Islamophobic provocateurs. They issued provocations such as calling for sharia law, mocking Danish customs, and celebrating images of a burning Danish flag. An example of a fake post read: “We Muslims have come here to stay. WE have not come here for peace but to take over your shitty infidel country”. On Facebook itself, these posts were widely shared, generating outrage and provoking expressions of hostility towards Muslims. The pages also received national media coverage, and drew a public expression of outrage from a Danish member of parliament.


“Although all news media questioned the authorship of the Facebook pages, most Facebook users who shared or commented on these pages’ posts assumed the originators were radical Islamists”.
“A key strategy for disseminating hate speech online is to hide or disguise the underlying intentions - both to avoid detection and appeal to a large audience”.
“The cloaked Facebook pages became sites of aggressive posting and reaction through comments, producing a spectacle of hostility. The page administrators created this hostility through new aggressive posts, and users maintained and reproduced this hostility through their reactions. The user-generated stream of information was based on aggressive and violent disinformation through the cloaked Facebook pages and fueled antagonistic reactions, contributing to neo-racism in Denmark."


News sharing on UK Social Media - misinformation, disinformation and correction survey report

Andrew Chadwick and Cristian Vaccari

Loughborough University, 2019

https://www.lboro.ac.uk/media/media/research/o3c/Chadwick%20Vaccari%20O3C-1%20News%20Sharing%20on%20UK%20Social%20Media.pdf

The focus of this report is the habits and attitudes of UK social media users in relation to misinformation, based on public opinion research conducted by Opinium.

Some striking results include:

  • 42.8 percent of news sharers admit to sharing inaccurate or false news

  • 17.3 percent admit to sharing news they thought was made up when they shared it. These users are more likely to be male, younger, and more interested in politics.

  • A substantial amount of the sharing on social media of inaccurate or made up news goes unchallenged. Fewer social media users (33.8 percent) report being corrected by other social media users than admit to sharing false or exaggerated news (42.8 percent). And 26.4 percent of those who shared inaccurate or made up news were not corrected.

  • Those who share news on social media are mainly motivated to inform others and express their feelings, but more civically-ambivalent motivations also play an important role. For example, almost a fifth of news sharers (18.7 percent) see upsetting others as an important motivation when they share news.


The authors note that the behaviour of sharing an inaccurate piece of content online occurs in the same disinhibited context as other forms of social media interaction:


“In social media interactions, anonymity or pseudonymity are widespread, or people use their real identities but have weak or no social ties with many of those with whom they discuss politics. As a result, when interacting on social media, people are generally more likely to question authority, disclose more information, and worry less about facing reprisals for their behaviour. The fact that many social media users feel less bounded by authority structures and reprisals does not necessarily lead to democratically undesirable interactions. Social media environments encourage the expression of legitimate but underrepresented views and the airing of grievances that are not addressed by existing communicative structures. However, social media may afford a political communication environment in which it is easier than ever to circulate ideas, and signal behavioural norms, that may, depending on the specific context, undermine the relational bonds required for tolerance and trust."

Suspicious Election Campaign Activity on Facebook: How a Large Network of Suspicious Accounts Promoted Alternative für Deutschland in the 2019 EU Parliamentary Elections

Trevor Davis, Steven Livingston, and Matt Hindman

George Washington University, 2019

https://smpa.gwu.edu/sites/g/files/zaxdzs2046/f/2019-07-22%20-%20Suspicious%20Election%20Campaign%20Activity%20White%20Paper%20-%20Print%20Version%20-%20IDDP.pdf


This report contains a detailed analysis of the ways in which networks of suspicious Facebook accounts promoted the German far-right party Alternative für Deutschland (AfD) during the May 2019 EU parliamentary elections. It identifies extensive use of apparently inauthentic accounts, at a point when Facebook had claimed that this problem had been addressed.

The authors identify two distinct networks of inauthentic accounts. The first was used to create a false impression of credibility for AfD pages by artificially boosting their followers. Of the second, they write:

"The second network we identified is more concerning. It is a network comprised of highly active accounts operating in concert to consistently promote AfD content.
We found over 80,000 active promotional accounts with at least three suspicious features. Such a network would be expensive to acquire and require considerable skill to operate.
These accounts have dense networks of co-followership, and they consistently “like” the same sets of individual AfD posts.
Many of the accounts identified share similar suspicious features, such as two-letter first and last names. They like the same sets of pages and posts. It is possible that this is a single, centrally controlled network.
Rates of activity observed were high but not impossible to achieve without automation. A dexterous and determined activist could systematically like several hundred AfD posts in a day. It is less plausible that an individual would do so every day, often upvoting both original postings of an image and each repost across dozens of other pages. This seems even less likely when the profile’s last recorded action was a post in Arabic or Russian.
Additionally, we found thousands of accounts which:
- Liked hundreds of posts from over fifty distinct AfD Facebook pages in a single week in each of ten consecutive weeks.
- Liked hundreds of AfD posts per week from pages they do not follow.
Automated accounts are the most likely explanation for these patterns. The current market price of an account that can be controlled in this way is between $8 and $150 each, with more valuable accounts more closely resembling real users. In addition to supply fluctuations, account price varies according to whether the account is curated for the country and whether they are maintained with a geographically specific set of IP addresses, if they have a phone number attached to them (delivered with the account), and the age of the account (older is more valuable).
Even if the identified accounts represented the entire promotional network, purchasing this level of synthetic activity would cost more than a million dollars at current rates.
Data collection from Facebook is limited, making it difficult to estimate the size of the network or the scale of the problem. Accounts in our dataset had persisted for at least a year."

“THE RUSSIANS ARE HACKING MY BRAIN!” Investigating Russia’s Internet Research Agency Twitter tactics during the 2016 United States presidential campaign

Darren L. Linvill, Brandon C. Boatwright, Will J. Grant, Patrick L. Warren

Computers in Human Behavior, Volume 99 (October 2019), pp. 292-300

This is a detailed study of the methods employed by the “Internet Research Agency”, an apparent arm of the Russian state, during the 2016 US presidential election. It describes the extensive use of false identities and anonymous accounts to disseminate disinformation. The authors detail fake accounts, run out of Russia, which purported to be local news sources with handles like @OnlineMemphis and @TodayPittsburgh. Others purported to be Republican-leaning US citizens, with handles like @AmelieBaldwin and @LeroyLovesUSA, and yet others claimed to be members of the #BlackLivesMatter movement, with handles such as @Blacktivist.


“Here we have demonstrated how tools employed by a foreign government actively worked to subvert and undermine authentic public agenda-building efforts by engaged publics. Accounts disguised as U.S. citizens infiltrated normal political conversations and inserted false, misleading, or sensationalized information. These practices create an existential threat to the very democratic ideals that grant the electorate confidence in the political process.
"Our findings suggest that this state-sponsored public agenda building attempted to achieve those effects prior to the 2016 U.S. Presidential election in two ways. First, the IRA destabilized authentic political discourse and focused support on one candidate in favor of another and, as their predecessors had done historically, worked to support a politically preferred candidate (Shane & Mazzetti, 2018, pp. 1–11). Second, the IRA worked to delegitimize knowledge. Just as the KGB historically spread conspiracies regarding the Kennedy assassination and the AIDS epidemic, our findings support previous research (Broniatowski et al., 2018) that IRA messaging attempted to undermine scientific consensus, civil institutions, and the trustworthiness of the media. These attacks could have the potential for societal damage well beyond any single political campaign."

Falling Behind: How social media companies are failing to combat inauthentic behaviour online

Sebastian Bay and Rolf Fredheim

NATO Strategic Communications Centre of Excellence (NATO STRATCOM)

November 2019

This report details a successful attempt by researchers to purchase inauthentic social media activity. The experiment was conducted between May and August 2019, and tested the platforms’ various claims that inauthenticity is largely a historical problem which they have now tackled. The authors conclude that the platforms’ claims to have tackled inauthentic activity have been exaggerated and that independent regulation is required. They write:

"To test the ability of Social Media Companies to identify and remove manipulation, we bought engagement on 105 different posts on Facebook, Instagram, Twitter, and YouTube using 11 Russian and 5 European (1 Polish, 2 German, 1 French, 1 Italian) social media manipulation service providers.
At a cost of just 300 EUR, we bought 3 530 comments, 25 750 likes, 20 000 views, and 5 100 followers. By studying the accounts that delivered the purchased manipulation, we were able to identify 18 739 accounts used to manipulate social media platforms.
In a test of the platforms’ ability to independently detect misuse, we found that four weeks after purchase, 4 in 5 of the bought inauthentic engagements were still online. We further tested the platforms’ ability to respond to user feedback by reporting a sample of the fake accounts. Three weeks after reporting, more than 95% of the reported accounts were still active online.
Most of the inauthentic accounts we monitored remained active throughout the experiment. This means that malicious activity conducted by other actors using the same services and the same accounts also went unnoticed.
While we did identify political manipulation—as many as four out of five accounts used for manipulation on Facebook had been used to engage with political content to some extent—we assess that more than 90% of purchased engagements on social media are used for commercial purposes.
Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms.
Self-regulation is not working. The manipulation industry is growing year by year. We see no sign that it is becoming substantially more expensive or more difficult to conduct widespread social media manipulation.
In contrast with the reports presented by the social media companies themselves, our report presents a different perspective: We were easily able to buy more than 54 000 inauthentic social media interactions with little or no resistance.
Although the fight against online disinformation and coordinated inauthentic behaviour is far from over, an important finding of our experiment is that the different platforms aren’t equally bad—in fact, some are significantly better at identifying and removing manipulative accounts and activities than others. Investment, resources, and determination make a difference.
Recommendations:
- Setting new standards and requiring reporting based on more meaningful criteria
- Establishing independent and well-resourced oversight of the social media platforms
- Increasing the transparency of the social media platforms
- Regulating the market for social media manipulation"
