The relationship between online anonymity and abuse on social media has been back in the spotlight this week, due to the appalling racist abuse directed at Premier League footballers including Marcus Rashford, Axel Tuanzebe and Anthony Martial.
Manchester United highlighted the role that anonymity appeared to play, branding the perpetrators "anonymous mindless idiots". Their manager Ole Gunnar Solskjaer described how they "hide behind anonymous social media accounts". The BBC's technology editor went so far as to ask, "Is ending anonymity on social media the answer?"
We believe that there are downsides to "ending anonymity" completely: as we explain here, there are circumstances where the ability to use social media anonymously can be important for freedom of expression. However, it is also very clear that anonymous social media accounts fuel a great deal of abuse and disinformation; we've previously explored the evidence for this in some detail, e.g. here (re: abuse) and here (re: disinformation). We therefore recommend action focused on preventing these harmful abuses of anonymity, whilst protecting its legitimate uses. The government's planned Online Safety Bill needs to include powers for the new regulator, Ofcom, to require social media platforms to act to reduce the harm caused by risk factors such as anonymity.
We believe regulation should focus on the desired outcome (in this case, that social media companies should have to demonstrate that they have acted to reduce the levels of abuse and disinformation from anonymous accounts) rather than stipulating precisely how it should be done. However, it is reasonable to assume that improved approaches to user verification will be an important component of such action.
But how could improved user verification on social media work in practice? Are there technical solutions already out there? How could we ensure that no user is excluded through not having the right documents? What about protecting users' privacy? We've produced a new briefing addressing these questions. It explains how optional user verification, combined with improved transparency and giving users more power to manage their level of interaction with unverified accounts, could substantially reduce the harm caused by anonymous abuse and disinformation, without any kind of "ban on anonymity".
You can read it here. Comments and feedback welcome!