David Babbs

Ofcom illegal harms consultation - some initial reflections

Ofcom launched its consultation on its proposed regime to tackle illegal harms in November 2023, with a response deadline of 23 February. Given that the consultation runs to some 1,900 pages, it has taken some time to digest. As we embark on drafting a formal response, we are sharing some provisional reflections. We would welcome comments and feedback.


The illegal harms consultation comes in six volumes with twelve accompanying annexes. For our work on the harms associated with anonymous, pseudonymous, and inauthentic social media accounts, the volumes which are of most direct interest are:

  • Volume 2 - The causes and impacts of online harm

  • Volume 3 - How should services assess the risk of online harms?

  • Volume 4 - What should services do to mitigate the risk of online harms?

  • Annex 7: Illegal Content Codes of Practice for user-to-user services


Volume 2 works its way through different “priority offences”, as designated by the OSA, identifying “risk factors” which enable those offences to proliferate. “Anonymous user profiles”, “Fake user profiles”, and “Multiple user profiles” are identified as relevant risk factors in a wide range of different forms of illegal content/activity occurring on social media: Terrorism; Grooming; CSAM; Encouraging or assisting suicide or self-harm; Harassment, stalking and threats; Hate Offences; Controlling or Coercive behaviour; Drugs and psychoactive substances offences; Sexual exploitation of adults; Intimate image abuse; Proceeds of Crime Offences; Fraud Offences; Foreign Interference; False Communication; Epilepsy Trolling; Cyberflashing.


In addition to mentioning anonymous, fake, and multiple profiles against a significant number of specific offences, “Pseudonymity and anonymity” is highlighted in the introduction, identified alongside end-to-end encryption, live streaming, and recommender systems as one of four functionalities which “stand out as posing particular risks”.


Pseudonymity and anonymity: There is some evidence that pseudonymity (where a person’s identity is hidden from others through the use of aliases) and anonymity can embolden offenders to engage in a number of harmful behaviours with reduced fear of the consequences. For example, while the evidence is contested, some studies suggest that pseudonymity and anonymity can embolden people to commit hate speech. At the same time, cases of harassment and stalking often involve perpetrators creating multiple fake user profiles to contact individuals against their will and to circumvent blocking and moderation

We’d add some other offences to this overview of the key harms associated with pseudonymity and multiple fake profiles - particularly fraud, given how vast the scale of fraud on social media is, and how heavily it relies on fake social media accounts.


It’s also a little odd for Ofcom to describe the evidence on the online disinhibition effect on social media as “contested”. There is a significant range of work exploring the links between anonymity/pseudonymity and abuse and harassment. We’re aware of only one study which disputes the existence of such a link, and that examined comments on a German petition site rather than a user-to-user social media platform. 


Nonetheless, this is a welcome recognition that the cluster of platform features which enables anonymous/pseudonymous accounts, fake accounts, and the operation of multiple profiles is a significant enabler of a range of harms.


Ofcom correctly qualifies this analysis by observing that these four “stand out” functionalities “are not inherently bad and have important benefits”. It notes that “Pseudonymity and anonymity can allow people to express themselves and engage freely online. In particular, anonymity can be important for historically marginalised groups such as members of the LGBTQ+ community who wish to talk openly about their sexuality or explore gender identity without fear of discrimination or harassment.” We have been explicit from the start of our campaign that we agree with this point - although it would perhaps be helpful, in reaching a balanced and proportionate assessment, for Ofcom also to note here that those same groups often face a disproportionate amount of hate speech and harassment from anonymous and fake accounts.


Ofcom then goes on to state that the aim is “not to restrict or prohibit the use of such functionalities, but rather to get services to put in place safeguards which allow users to enjoy the benefits they bring while managing the risks appropriately” [our emphasis]. This highlights, for us, the most fundamental test of what Ofcom is proposing. Ofcom provides, in Volume 2, a lengthy, fairly comprehensive assessment of the ways in which anonymous, pseudonymous and fake accounts are a risk factor for the OSA’s priority offences. But do the measures which Ofcom then goes on to recommend add up to a set of “safeguards” which will ensure that platforms are indeed “managing the risks appropriately”? Or to put it even more bluntly: if platforms do what Ofcom is proposing to require them to do, but no more, will users experience a significant enough reduction in their exposure to illegal harms?


Ofcom’s proposals for these “safeguards” appear in Volume 3, which sets out Ofcom’s proposed expectations for platforms’ risk assessment processes, and Volume 4, which sets out its proposed recommendations for the measures platforms should take. Crucially, if a platform follows all the relevant recommendations in Volume 4, Ofcom states that it will consider the platform to be complying with its illegal content duties:


Services that decide to implement measures recommended to them for the kinds of illegal harms and their size or level of risk indicated in our Codes of Practice will be treated as complying with the relevant duty. This means that Ofcom will not take enforcement action against them for breach of that duty.

This closely follows the language of s41(1) of the Act, which states that a provider “is to be treated as complying with a relevant duty if the provider takes or uses the measures described in a code of practice which are recommended for the purpose of compliance with the duty in question”. This “safe harbour” provision means the effectiveness of the measures which Ofcom recommends is critical. They need to be sufficiently demanding to deliver the improvement in online safety which is the purpose of the Act. It is manifestly not the intent of s41(1) for a platform with a continued high prevalence of illegal harm to enjoy a “safe harbour” simply because it has followed an insufficiently stringent set of measures.


In other words, s41(1) requires Ofcom to be confident that its measures are sufficiently stringent to fulfil the Act’s stated purpose of ensuring that companies “identify, mitigate and manage the risks of harm” and that their platforms are “safe by design”. It has to be obvious that the “safe harbour” was only intended by the legislators to be available in circumstances in which the measures and recommendations clearly satisfied the aims of the legislation in terms of reducing harms.


Sadly, we are concerned that Ofcom’s preliminary set of proposals fails to meet this test. We find it hard to imagine the combined impact of its recommendations being a huge reduction in illegal content, or a significant improvement in users’ experience. There seem to be yawning gaps between Ofcom’s analysis of the problems (Volume 2), its proposed solutions (Volumes 3 and 4), and the purpose of the Act or the UK government’s oft-stated objective of making the UK “the safest place in the world to be online”.


Ofcom has indicated, in both the consultation and in other communications, that it sees its proposed measures as a first iteration, which it expects to expand quite frequently in the coming years. But whilst we agree with an iterative approach, we do not think it justifies Ofcom’s decision to set the bar so very low at the outset. Ofcom appears to intend to “iterate up” from this low bar, and has therefore erred on the side of not recommending measures which have the potential to prevent harm where it perceives there to be evidential gaps, or uncertainty as to the capacity of companies to comply with a measure.


This approach risks prioritising the interests of regulated companies over those of users. We are not sure which part of the Act Ofcom considers requires it to apply a precautionary principle to avoid inconveniencing companies, rather than a precautionary principle to protect users from illegal harms. Our view is that it is clear from the legislation itself, and from the parliamentary discussions surrounding it, that user safety was expected to take precedence: Ofcom should err on the side of recommending what appears necessary to tackle illegal harms, with the ability to relax measures should they later prove unnecessary or be overtaken by developments.


There are additional, generally applicable measures relating to platforms’ designs, systems, and functionalities which Ofcom has failed to consider or to include in its codes. Recommending that platforms provide users with optional user verification is one such measure, which we explore in more detail below. But in addition we wonder, given the broad definition of “measure” in sections 10(4) and 236(1), whether Ofcom could have framed some of its recommendations to put more onus on platforms to take their own share of responsibility for identifying and implementing specific steps to reduce specific risks on their specific platform. Surely “measures” could include processes which Ofcom requires a platform to follow, where the prevalence of a priority harm on that platform remains above a specified threshold, in order to identify additional steps it should take to reduce that risk?


A closer look at Fraud, one of the Act’s “priority offences”, offers a helpful illustration of the unacceptable gap between the harms Ofcom identifies in Volume 2 and the measures it is currently proposing in Volume 4. Our own research has identified the hugely significant role which anonymous and fake accounts play in enabling fraud. We were unable to identify a form of fraud on social media which does not in some way make use of fake accounts. Ofcom’s own analysis in Volume 2 appears to agree with this: having listed a number of different kinds of fraud which occur on social media platforms, Ofcom notes that “although we identified impersonation fraud specifically, impersonation can be used by fraudsters as a tactic in all of the other examples listed” [our emphasis]. 


Ofcom accepts that fraud is “the most commonly experienced illegal harm”, which causes users “significant financial and psychological harm”. Given the immense scale of this problem, and the key role which fake accounts play in enabling it, one might expect the “safeguards” which Ofcom recommends to ensure platforms are “managing the risk” to be fairly extensive. Yet Ofcom’s recommendations relate only to “notable” and “monetised” verification schemes, with the aim of reducing “the specific risks of harm arising from notable user impersonation”. In and of itself, this recommendation is welcome - indeed, we highlighted some of the risks associated with a conflation of “monetised” and “notable” verification schemes when “Twitter Blue” was first launched. But Ofcom itself acknowledges that this recommendation will not address the majority of ways in which fake accounts are exploited to perpetrate fraud:


“We recognise that impersonation is a factor in a much broader range of harms such as romance fraud, fraud on online marketplaces and in the sharing of disinformation online. However, this proposed measure is targeted to address a particular way in which impersonation manifests, and at this stage, we have focused on remedies for impersonation of notable users for the reasons described above. We note that other types of services (such as marketplaces or dating services) also operate forms of verification schemes and we would like to expand our policy thinking as our evidence base on illegal harms and remedies grows”


Ofcom does not offer any detailed explanation of why they have limited their recommendation in this way “at this stage”, but they do include in Volume 4 an exploration of a much more general recommendation on mandatory user verification. Clean Up The Internet has never called for a mandatory user verification measure, and as we set out in more detail below, we were surprised to see Ofcom focus on this option for detailed exploration whilst failing to consider other options which have fewer trade-offs and more widespread support. Nonetheless we read this section closely as it provides interesting insights into Ofcom’s current thinking on verification, and their approach to considering whether or not to make a recommendation.


Ofcom’s exploration in Volume 4 includes a welcome acknowledgement of the wide range of harms for which anonymity, pseudonymity, fake profiles, and the ability to create multiple profiles are a risk factor, and for which verification could mitigate the risks. They accept that “given the broad range of illegal harms of which anonymity increases the risk, any interference could be said to be in pursuit of several aims, including the prevention of disorder or crime, the protection of the rights and freedoms of others, and the interests of national security.” They also, quite reasonably, highlight some of the arguments made by those who raise concerns about measures which would restrict anonymity and/or argue against mandatory verification.


It is perhaps inevitable that, in a consultation of this size, the reasoning in one volume is not always carried across and fully fleshed out in terms of its implications for other parts. But we do have serious concerns that the section’s referencing, in making the case for the benefits of anonymity online and the rights implications, is somewhat thin, and fails in our view to recognise the weight of evidence that anonymous and fake accounts cause harm, the scale of that harm, or the numbers of users negatively impacted by it. We would like to see greater evaluation, and referencing, of assertions about the benefits of anonymity (“Whistle-blowers, journalists, and activists are often highlighted as those benefiting from anonymity to carry out their work.”). We would like Ofcom to balance its observation that anonymity can benefit minority groups with a recognition that those same minority groups are also disproportionately impacted by many of the harms which anonymity enables. Most concerning of all, Ofcom cites, uncritically, discredited “evidence” from Twitter obfuscating the role of anonymous accounts in the abuse of Black English footballers following the Euro 2020 final. We called out Twitter on that blatant deception at the time, and have never had a response. It is inexplicable that in the consultation, evidence that verification can help safeguard users and prevent illegal behaviour appears subject to a contrastingly high burden of proof, rejected because “it is difficult to disentangle the effect of verification from other measures implemented”.


Ofcom concludes this exploration of mandatory verification by stating that it is “currently unable to assess the proportionality of a recommendation that services apply any sort of IDV measure to comply with the illegal content safety duty”. This appears to be the only instance in the entire 1,900-page consultation of Ofcom being “unable to assess the proportionality” of a recommendation. It would be helpful if Ofcom were to set out more clearly here what would enable it to assess the proportionality, and what factors it considers relevant when doing so. Elsewhere in the consultation its focus appears to be quite strongly on costs to the platform, and here there is also some hint (in the references to whistleblowers and minority groups) that it is balancing the rights of different user groups. It would be helpful for Ofcom to set out more clearly how it weighed these factors against the potential efficacy of the measure under consideration, the scale of the harm, or the costs of the harm.


Ofcom’s exploration, and rejection, of an IDV recommendation also raises some interesting questions about the standards of proof which Ofcom feels it needs in order to make a recommendation, and whether these are being applied consistently. The OSA is a civil regime, and so it might be expected that a “balance of probabilities” approach would be followed. Ofcom has accepted that anonymous and fake accounts are a significant risk factor for a significant range of offences, and has highlighted a number of cases where platforms have introduced verification to reduce harm - so there seems a strong argument that, on the balance of probabilities, recommending such a measure would make a difference. Yet evidence from Aylo (formerly MindGeek) of “a relative 55% decrease in attempted violative content uploads since it introduced the uploader ID requirements” is challenged on the grounds that “it is difficult to disentangle the effect of verification from other measures implemented. For example, MindGeek’s transparency reports have outlined a range of technologies and policies introduced to deter harmful content alongside identity verification for uploaders.” This sounds like a rather higher burden of proof than “balance of probabilities”. It also seems somewhat inconsistent with the uncritical manner in which Twitter’s dishonest claims about the sources of abuse towards footballers are repeated.


It is very odd that, having considered (albeit inconclusively) a recommendation on mandatory user verification, Ofcom fails to give any such consideration to a recommendation along the lines Clean Up The Internet has long been suggesting: make it mandatory for platforms to offer their users an option to verify, alongside visibility of verification status and options to filter out non-verified users. Notwithstanding our questions about exactly how Ofcom attempted to assess the proportionality of a potential recommendation on mandatory verification, we are confident that an optional verification scheme involves far fewer trade-offs in terms of freedom of expression or privacy. Had such a measure been considered properly, we suspect Ofcom would have been able to assess its proportionality and, given the significant potential safety benefits and far smaller potential implications for privacy or freedom of expression, would have found the balance of arguments in favour of such a scheme quite compelling.


Ofcom’s peculiar justification for not considering inclusion of this measure in its recommendations, as a way of managing the risks of illegal harms enabled by fake or anonymous accounts, is that a measure of this sort will eventually be required of “Category 1” platforms under their additional “User Empowerment” and “User Identity Verification” duties. The OSA’s “user identity verification” and “user empowerment” duties for Category 1 platforms will, as Ofcom puts it here, “require designated services to provide features specifically for adult users around legal content” [their emphasis]. According to Ofcom’s current timetable, its consultation on those measures will have to wait until early 2025. We have concerns about the length of this wait. But more importantly, it seems to us illogical - and inconsistent with Ofcom’s duties under section 41(3) of the Act - to cite the possible future existence of these additional duties for Category 1 platforms regarding legal content as a justification for failing even to consider a recommendation on optional user verification as a measure to reduce illegal harms on a wider range of platforms.


Ofcom should surely give due consideration to any measure which could safeguard users from illegal harms on platforms where anonymous or fake accounts are a relevant risk factor. It makes no sense whatsoever to refuse to consider a potential risk mitigation measure for illegal content, simply because a similar measure will later be applied for a separate purpose, under a separate part of the Act. It clearly was not the intention of legislators, in introducing specific extra “user empowerment” duties for the largest, “Category 1” platforms, to preclude Ofcom from considering measures on their merits under the illegal content duties. Indeed, the very broad definition of “measures” offered in s10 explicitly includes “functionalities allowing users to control the content they encounter”, which is exactly what a measure combining optional verification with options to filter out non-verified accounts would be.


Confusingly, Ofcom has been willing to consider, and to recommend, some other measures which give users optional features to control who they interact with and whose content they see - despite such measures also overlapping with measures envisaged for the s15 user empowerment duties. For the purposes of its illegal content codes Ofcom chooses to call such measures “enhanced user controls”, but the measures described in the “enhanced user controls” chapter could equally be described as “control features” as per the language of the user empowerment duty. Alongside the very narrowly focused recommendation on notable and monetised verification schemes, this chapter recommends giving users “options to block or mute other user accounts on the service (whether or not they are connected on the service), and the option to block all non-connected users”. These blocking measures clearly overlap with the “control features” envisaged in s15(5) of the user empowerment duties. We would note in passing that the efficacy of both these recommended blocking measures is greatly weakened by a lack of any parallel measures to address fake accounts and multiple accounts - because a bad actor can easily bypass a user “block” by creating a new account or indeed dozens of such accounts. 


To conclude, so far Ofcom has done a much better job of analysing the risk factors which enable illegal harms to proliferate on social media than it has of making recommendations to prevent them. We’re not confident that Ofcom’s preliminary recommendations add up to appropriate “safeguards” to mitigate the role which anonymous and fake accounts play in enabling a wide range of illegal harms. If a platform follows Ofcom’s recommendations, it enjoys certainty that Ofcom will not take enforcement action against it. There is much less certainty that, if a platform follows Ofcom’s recommendations, its users will have a significantly safer experience. It seems clear to us that the safe harbour was only intended to be available where the recommendations are robust enough to deal with the harms identified, and we do not see in the legislation any provision that effectively allows Ofcom to grant exemptions from its scope.


Thankfully, these recommendations are just drafts, and Ofcom has so far expressed a lot of willingness to listen to feedback. Ofcom has also emphasised repeatedly that it expects its list of recommendations to grow over time, for example as its information-gathering powers enable it to collect more evidence to support its recommendations. We are looking forward to encouraging it to follow up its strong analysis of the risk factors with some equally strong recommendations for how those risks can effectively be managed - and to do so without delay.

