Stephen Kinsella

Looking back on 2022

For the UK’s online safety agenda, 2022 was a year where much was discussed but not enough changed. In Westminster, there were numerous debates, committee hearings, and ministerial announcements. But social media platforms remain hospitable environments for abuse, disinformation and fraud, and seriously dysfunctional as virtual “public squares”. The Online Safety Bill still hasn’t been passed, and enforcement of its provisions is only a little closer than it appeared twelve months ago.

Instability in the governing Conservative Party was a major cause of delays this year, with changes of Prime Ministers and lead Ministers, derailed timelines, and the reopening of previously settled policy debates. Whilst the legislation stalled, evidence of the urgent need for regulation continued to grow - most powerfully, with the findings of the inquest into the death of Molly Russell.

But whilst delays made 2022 quite a frustrating year, there have been some positive developments, both in the UK and globally. For Clean Up The Internet, 2022 saw us make significant headway in advancing our own proposals to tackle the misuse of anonymity. The UK’s Online Safety Bill had, by the end of the year, restarted its progress through parliament, and retained a large amount of cross party support. In the European Union, the Digital Services Act package entered into force, setting new transparency and safety obligations on social media platforms which have the potential to raise standards globally. And the potential for regulation in one jurisdiction to have a global impact was reaffirmed in August when the Californian state legislature adopted an Age Appropriate Design Code, based on the UK code of the same name.

Clean Up The Internet’s main breakthrough of the year came in March. Until then, the government had consistently and often vocally ruled out any measures to tackle the misuse of anonymity on social media platforms. The December 2020 response to the Online Harms White Paper consultation stated firmly that online safety legislation would “not put any new limits on online anonymity”, and initial drafts of the Online Safety Bill reflected this position. By the beginning of 2022 pressure for a change of position had grown, including from several parliamentary committees and a significant number of MPs, who were expressing support for Siobhan Baillie MP’s Identity Verification Ten Minute Rule Bill. On 1 March, the government announced that they would be introducing measures very much in line with our proposals: a requirement for platforms to offer users an option to verify their identity, alongside options to filter out interaction from non-verified users.

This was a huge step forward. The government had finally accepted that platforms’ laissez-faire approach to user verification and anonymous accounts was a design choice, which fuelled harmful behaviour, and that this could be changed through sensible regulation. Problems associated with anonymity could be tackled through proportionate solutions which restricted misuse of anonymity, and gave users more choices to avoid negative behaviour from anonymous accounts, without unduly limiting anonymity’s legitimate uses. The government published a revised Bill a few weeks later, which adopted this new approach. The revised Bill introduced a new “user verification duty”, combined with new “user empowerment duties”, including options to filter out non-verified accounts.

We were delighted at this progress, but we also identified a few gaps in the revised wording. Most crucially, the current version of the Bill fails to set out any definition for what counts as a “verification process”, beyond stating in Clause 57(2) that “the verification process may be of any kind (and in particular, it need not require documentation to be provided)”. Whilst Clause 58 requires Ofcom to produce guidance, it doesn’t require that the guidance should set out any principles or minimum standards for how a verification process works. As we observed at the time:

We assume that the government’s intention here is to make clear they do not seek to impose a one-size-fits-all approach[…]We strongly support allowing for flexibility, innovation, and choice in how verification is implemented, including approaches which do not rely on ID documents. However, flexibility will need to be combined with some minimum standards to ensure that the verification is meaningful. Platforms mustn’t be allowed to claim any old mechanism constitutes “verification”. At the moment the Bill offers no definition at all of what constitutes “verification”, and platforms could try to exploit this to maintain their current approaches.

These weren’t purely hypothetical concerns. We pointed out at the time that Twitter had, as recently as 2021, claimed that the mere fact that a user had an email address, however untraceable it might be, constituted “verification”. And later, in November 2022, Elon Musk offered yet more evidence of the dangers of allowing platforms to set their own standards for a verification process. In one of the more high profile moves which followed his Twitter takeover, he offered blue ticks denoting “verified status” on Twitter to any user paying an $8 monthly subscription. This so-called “verification” process led to a rapid proliferation of blue tick accounts impersonating brands, celebrities, and experts, and/or peddling scams and disinformation.

We have put forward a few modest amendments which would strengthen and build on the government’s wording, adding an overall definition of “verification” to the Bill, and tasking Ofcom with setting minimum standards for platforms’ verification systems. We remain optimistic that these amendments can be incorporated in 2023 before the Bill continues its passage through parliament. More than one of the ministers who have held the Online Safety Bill brief over the past year have indicated that they are receptive to our suggestions. There is also cross-party parliamentary support for the User Verification Duty, which we expect to extend to supporting amendments which would ensure it delivers in practice for end users.

We hope to see other improvements to the Online Safety Bill as it passes through the House of Lords - including tightening up provisions relating to Age Assurance and Age Verification, and removing excessive Secretary of State powers which risk compromising the independence of the regulator. We will look forward to working with a range of other experts and organisations in pushing for these improvements. We don’t expect the final Online Safety Act to be perfect, or the last piece of online safety legislation the UK will need. But we do believe it should be an important - and long overdue - move away from the failed era of self-regulation, and establish significant regulatory foundations which can be built upon in the future.

So in many ways, for Clean Up The Internet, 2023 begins much like 2022 did - with an expectation that the Online Safety Bill will complete its passage through parliament, and plans to work with parliamentarians from all parties to improve and strengthen it. The strictures of parliamentary procedure mean the Online Safety Bill needs to be passed this year. Assuming that finally does happen, we’d expect Clean Up the Internet’s attention to then turn to Ofcom as it becomes tasked first with issuing guidance and then with enforcement. We will also look forward, with (fingers crossed) provisions to tackle the misuse of anonymity finally on the statute book, to considering what Clean Up The Internet should work on next.
