In August, we wrote to Twitter requesting more information about claims it had made about racist abuse of England footballers on its platform. Twitter issued the claims in response to widespread disgust at the racist abuse directed at black England players on social media following England’s defeat in the final of the UEFA Euros. Criticism had been directed at a number of social media companies, including Twitter, for their failure to do more to prevent such abuse from occurring on their platforms.
Twitter issued what it described as “analysis” of the tweets it had identified as abusive, and the associated accounts. This included a claim that “ID verification would have been unlikely to prevent the abuse from happening - as the accounts we suspended themselves were not anonymous. Of the permanently suspended accounts from the Tournament, 99% of account owners were identifiable”. This 99% claim was then republished uncritically by many news outlets.
Quite aside from being so high as to stretch credulity, the 99% figure contradicted the experiences described by footballers themselves, reports from the FA and the football anti-racism group Kick It Out, and research conducted by Signify for the football authorities. It also did not fit with what we would have expected to see, given the considerable body of research we are aware of which has found strong links between higher levels of anonymity and higher levels of abuse on social media platforms, including Twitter. We therefore wrote to Twitter in August asking it to provide details of how it arrived at this figure.
Twitter has never replied to us directly. However, some further details of how Twitter arrived at this figure have come to light, thanks to questioning by the Home Affairs Select Committee and an intervention by Dame Margaret Hodge MP, who challenged Twitter over its failure to respond to our request and did eventually receive a written reply in November (into which we were copied).
Simon Fell, Conservative MP for Barrow and Furness, questioned Twitter’s Head of UK Policy and Philanthropy when she appeared in front of the Home Affairs Select Committee on 8 September. Referring to Twitter’s claims following the Euros, he asked: “what qualifies as anonymous?”.
The response given was that “99% of the accounts were identifiable. That means they have provided at least one, in most cases two, pieces of personal information”. She then defined these “pieces of personal information” as “full name, Date of Birth, email address, phone number” and stated that you “have to verify to get on the service”.
Twitter’s reply to Margaret Hodge made similar claims, stating that "99% of account owners were identifiable, meaning a verified phone number or email address (or both) was associated with the account."
On close examination, these statements appear to confirm that Twitter chose to rely on a rather surprising definition of anonymity in order to make its claims that accounts involved in racist abuse of England footballers following the Euros were “not anonymous”.
Twitter has for a number of years asked new users to provide an email address, or a phone number, and then enter a code sent to that email or phone number, as part of the registration process. This confirms that the user has access to the email address, or is able to receive messages sent to the phone number - a process Twitter refers to as “verification”.
Twitter also asks users to provide their date of birth, and to enter a “name”. There is no process of “verification” associated with either of these pieces of information.
This means a new user only has to provide an email address or a phone number from which they are able to retrieve a code, which must then be entered during the account creation process. They can enter whatever name they choose, and the only limit on what they enter as a date of birth is that it must suggest they are over 13 years old.
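The registration flow described above can be sketched in a few lines of code. This is purely our illustration of the generic pattern, not Twitter’s actual implementation: a one-time code is sent to the supplied email address or phone number, and entering it back proves only that the user can receive messages there - nothing about the name or date of birth they typed in is checked.

```python
import secrets

def issue_code():
    """Generate a 6-digit one-time code to send to the supplied
    email address or phone number (delivery not shown here)."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify(sent_code, entered_code):
    """'Verification' succeeds if the codes match, i.e. the user
    could read messages sent to the address they gave.
    Note what is NOT checked: the name and date of birth entered
    during registration are accepted as-is, unverified."""
    return secrets.compare_digest(sent_code, entered_code)

code = issue_code()
print(verify(code, code))  # True: the address is reachable, nothing more
```

In a flow like this, any reachable inbox or SIM card satisfies the check, which is why a freshly created “burner” address passes it just as easily as a long-standing personal one.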
It is very easy to create a “burner” email address under a totally fictitious name, which could be used for this purpose. Creating such a Gmail address, for example, costs nothing and takes a couple of minutes. It is also quite easy to acquire a “burner” phone number - for example, by buying a pay-as-you-go SIM card, widely available in newsagents across the UK for 99p.
It appears, therefore, that in order to arrive at the claim that 99% of Twitter accounts associated with racist abuse following the Euros were “not anonymous”, Twitter adopted a definition of “not anonymous” which would include accounts using an unverified name, an unverified date of birth, and an email address or phone number which could have been created or acquired specifically for this purpose, contain no identifying information, and be linked to no other account.
For example, Clean Up The Internet has created this account, with the name “Mickey Mouse”, the date of birth 9/11/2001, the email address email@example.com, and a number from a pay-as-you-go SIM card bought from a newsagent for 99p, both of which have been used only for this purpose. Using this account we are free to reply to Marcus Rashford, one of the footballers who was targeted with abuse following the Euros.
We were also able to create an account using the name “Simon F Fell”, the date of birth 9/11/01, and the specially created email address firstname.lastname@example.org (no phone number at all this time) - and then direct tweets at Simon Fell MP, the MP who questioned Twitter’s claims in parliament.
To summarise, Twitter adopted a definition which enabled it to classify such Mickey Mouse accounts as “not anonymous”. It then doubled down on these claims in responses to parliamentarians, by seeking to present its account registration process as requiring and verifying “personal information”, when in reality no genuine “verification” takes place or is even attempted.
We therefore consider Twitter’s claims to have been highly misleading. We do not think they shed any light on the true role played by anonymous accounts in the abuse of England footballers. Sadly, they shed rather more light on the extent to which social media companies should be trusted to provide their own “analysis” of factors driving harm on their platforms.
Twitter concluded their August “analysis” by stating that “there is no place for racist abuse on Twitter, and we are determined to do all we can to stop these abhorrent views and behaviours from being seen on our platform”. It appears that “doing all they can” includes obfuscating the role which factors such as anonymity play in fuelling such abuse.
We have emailed Twitter, drawing their attention to this blog, and inviting a response. In the interests of transparency (something which we take seriously), we will provide an update should we receive any response.