On 16 December last year, Ofcom published its Illegal Content Codes of Practice. These followed on from the consultation on the drafts published in November 2023. On 17 March 2025, this version of the Codes will come into force. This means platforms will be required under the Online Safety Act to implement the measures which apply to them (or deploy “other effective measures to protect users from illegal content and activity”). Failure to do so will risk enforcement action from Ofcom. The publication was therefore an important milestone, which takes us a significant step closer to the Online Safety Act coming properly into force.
As we have learnt to expect, Ofcom’s publication ran to many hundreds of pages. There were 12 Volumes and Regulatory Documents, and a further 5 Annexes. It has taken some time to fully digest all the decisions, and Ofcom’s explanations of how it reached them. But whilst we’re still teasing out some of the details, it seems clear that, although the publication of the Codes is a welcome step towards enforcement of the Online Safety Act, their content is disappointing.
When Ofcom published its draft Codes for consultation over a year ago, Clean Up The Internet (along with many other organisations) argued that the proposed draft measures were insufficient to address the harms which Ofcom had identified in its own draft Risk Assessment. We highlighted that this was particularly problematic given that the Act offers a “safe harbour” to platforms which follow the measures. We warned that if Ofcom didn’t strengthen the measures in the Codes, we would find ourselves in the absurd position of manifestly unsafe platforms being able to claim compliance with the Online Safety Act, because they’d followed the letter of the Codes. This clearly wasn’t the intention of legislators when they included the “safe harbour” provision. On the contrary, it was included on the assumption that Ofcom would propose measures sufficient to address the risks it had identified.
Ofcom’s failure to include a measure on user identity verification was a significant example of this flaw in its draft thinking. Sadly this has been carried through unaddressed into the version published in December. Ofcom’s Register of Risks (p462) correctly lists “CSEA (grooming), harassment/stalking/threats/abuse, controlling or coercive behaviour, proceeds of crime, fraud and financial services and foreign interference offences” as “key kinds of harm” linked to fake user accounts. It correctly lists “CSEA (CSAM), encouraging or assisting suicide, hate and harassment/stalking/threats/abuse, Terrorism, foreign interference, drugs and psychoactive substances, firearms, knives and other weapons offences” as “key kinds of harm” linked to anonymous accounts. Yet the Codes of Practice do not propose measures to address these risks from fake and anonymous accounts. Compliance with the OSA will not require a platform to introduce any verification measures to restrict the ability of bad actors to create and operate fake or anonymous accounts, or to help other users to identify or avoid such accounts.
The absence of such verification measures is compounded by the confirmed delay until 2026 of the consultation on implementing “phase 3” of the Online Safety Act, which includes the User Identity Verification duty and other measures that apply specifically to Category One platforms. Ofcom has repeatedly cited that Category One duty as an excuse for not considering an equivalent measure in the Illegal Content Codes. We have always argued that it is not consistent with Ofcom’s duties to refuse to consider a measure for the Illegal Content Codes simply because the Act requires a related measure elsewhere for a subset of platforms. The year’s delay makes this position even less tenable, and even more harmful to UK users.
In our meetings with Ofcom over the past year, the regulator sought to manage our expectations. It indicated we should expect minimal change between the 2023 consultation draft and the final Codes. Ofcom argued that making significant changes to what was in the draft - such as the inclusion of any identity verification measures - would require further consultation, and thus delay implementation and enforcement of the Codes. It argued that it was therefore better to get a first version of the Codes implemented and enforcement under way, and then to “iterate up”. It promised further consultations on improvements in the near future.
Yet we note that in some important respects the Codes have been changed between the 2023 draft and the recently published version - and changed in ways that water them down. The most significant example we have identified so far concerns measure ICU 2, which requires a content moderation function that allows for the swift takedown of illegal content. Between the 2023 consultation draft and the published 2024 version, a significant caveat has been added: the new version of the measure offers platforms an exemption where it is “currently not technically feasible for them to achieve this outcome”.
This change seems to us to be very significant. It creates a major loophole, and risks creating a perverse incentive for platforms to implement measures which make it “not technically feasible” to monitor or take down illegal content such as CSAM. Contrary to the OSA’s stated aim of promoting “safety by design”, this new version of the measure has the potential to encourage “not technically feasible to take down illegal content, by design”. The Marie Collins Foundation, which focuses on stopping technology-assisted child sexual abuse, has branded this “the most troubling aspect of what has been released by Ofcom” and says that “the sense of disappointment in these measures is palpable across the sector of those working to counter online risks and enhance safety online, particularly for children”.
It’s not immediately clear to us why Ofcom felt it was not possible to amend its draft Codes to strengthen them against the risks from fake and anonymous accounts without necessitating a further round of consultation and delay, yet felt it was possible to amend the draft Codes to significantly dilute a measure intended to protect users from serious harm such as CSAM. We note that in both cases the beneficiaries of Ofcom’s decision-making appear to be the platforms. We also note that in its justification, in Annex 1 (page 137), for delaying verification measures until they are introduced “as part of our Phase 3 work on the user identity verification guidance”, Ofcom acknowledges arguments made by “several stakeholders” for including such a measure in the Illegal Content Codes, and names 7 of these stakeholders. Whereas in the case of the decision to water down ICU 2 (Vol 2, page 26), Ofcom claims its decision was influenced by a “small number” of stakeholders - but provides details only of Meta-owned WhatsApp.
We’ve written to Ofcom requesting more information regarding the decision-making processes which led to both these decisions. We hope this will shed some light on Ofcom’s contrasting levels of willingness to change the draft Codes.