Stephen Kinsella

The long road to The Online Safety Act 2023

The Online Safety Bill has this week cleared its final legislative hurdle. The formality of Royal Assent remains, but the wording of what will soon be the Online Safety Act 2023 is finally settled.

The Act has been a long time coming. There have been some good reasons for it taking so long, alongside some bad ones. This is important and complex legislation, and it was positive that the government allowed time for proper consultation, debate, and deliberation. Most notably, the pre-legislative scrutiny process, by a joint committee of MPs and peers, ably chaired by Damian Collins, generated many useful recommendations which found their way into the legislation. Yet the legislative process was also prolonged by a turbulent political context: a changing roster of prime ministers, secretaries of state, and more junior ministers, together with a departmental restructure, all contributed to delays and inconsistencies of approach.


A positive consequence of this extended process was the level of expertise and collaboration it allowed to develop over time among both parliamentarians and civil society organisations. Within Parliament, there was a striking degree of cross-party cooperation, and a notable number of parliamentarians (Siobhan Baillie MP, Alex Davies-Jones MP, Damian Collins MP, Maria Miller MP, Lord Clement-Jones, Baroness Kidron, Baroness Morgan, Lord Stevenson, to name but a few) whose sustained involvement led to deepening expertise. Outside Parliament, Ian Russell and the Molly Rose Foundation played a particularly important role, as did Carnegie UK, which combined policy expertise with deft convening. Groups representing and involving victims and survivors of online harms played a crucial role, for example in securing government concessions to improve the Bill’s approach to violence against women and girls. It will be essential for these perspectives also to be well represented - and listened to - as the many codes and pieces of guidance are prepared.

Clean Up The Internet is very pleased with the progress we’ve made since September 2019, when we launched our proposals for tackling the widespread misuse of anonymity to abuse or deceive other users. From the beginning we’ve argued for an approach which neither bans anonymity nor accepts the platforms’ current laissez-faire approach. We argued that online safety legislation could require platforms to give users an *option* to verify their identity, with all users being able to see who is and isn’t verified, and an option to filter out interaction with non-verified accounts as a category, rather than having to deal with them individually.

When we started our campaign, there was an unhelpfully polarised debate between those who wanted to “ban” anonymity on the grounds it enabled abuse, and those who wanted to “save” it on the grounds that for some users it was a source of protection. Both perspectives had some merit - those calling for a ban had often experienced appalling levels of abuse, whilst those opposing it were able to highlight some instances where anonymity had indeed enabled someone to speak out. The government clearly struggled to see a way of reconciling these arguments - its 2020 response to the Online Harms White Paper Consultation noted “arguments both for and against preserving online anonymity”, and concluded that “the legislation will, therefore, not put any new limits on online anonymity”. When the government finally published its draft Bill, in July 2021, it still contained no specific measures on anonymity or verification.

However, in the meantime, Clean Up The Internet had been making gradual progress, gaining a hearing for our proposals for a more nuanced and proportionate response. Further high-profile examples of appalling online abuse, in which anonymity clearly played a role, kept the issue in the spotlight - most notably the racist abuse directed at some England men’s footballers after the Euro 2020 final in July 2021. Siobhan Baillie, the Conservative MP for Stroud, who had herself experienced anonymous abuse on social media, became one of our earliest, most determined, and most effective champions in Parliament. We submitted evidence to a number of parliamentary committees, including the Lords Communications and Digital Committee, the Commons DCMS Committee, the Home Affairs Select Committee, and subsequently the Joint Committee charged with scrutiny of the draft Online Safety Bill. Other organisations, from Kick It Out to Compassion in Politics, responded positively and offered support. The campaign gained coverage in a range of outlets, from the Guardian to the Express.

By the end of 2021 our campaign was gathering significant momentum. Siobhan Baillie MP’s Ten Minute Rule Bill, which she introduced in November 2021, was an important milestone which helped prove - including to the government - that our ideas were finding acceptance across Parliament. In the same month, the Petitions Committee invited us to give evidence to its inquiry into online abuse (and subsequently endorsed our proposals), and just a few weeks later the Joint Committee on the Draft Online Safety Bill published its report, which also endorsed our approach.

On 25 February 2022, the Department for Digital, Culture, Media and Sport finally responded positively to our campaign. It announced new measures in the revised Online Safety Bill to “protect people from anonymous trolls”, adopting our proposal. These would include a “user verification duty”, requiring the largest platforms to offer their users an option to verify their identity, and “user empowerment” duties, including offering their users an option to filter out non-verified users. We inevitably had some questions, and didn’t quite pop the champagne corks, but this marked a huge shift in the government’s stance.

The change in policy was confirmed when the government tabled a revised Online Safety Bill a few weeks later. We identified some gaps in the wording - in particular around ensuring that a user’s verification status was visible to others - but overall felt that things were heading in the right direction, and that our remaining concerns were fairly technical and could be addressed as the Bill passed through Parliament.

After significant progress over the winter of 2021-22, the remainder of 2022 was frustratingly slow. As the Bill entered Commons committee stage, our suggested amendments - along with pretty much every other suggested amendment - were knocked back by the government. The Bill then got subsumed in wider political turmoil. As Boris Johnson was replaced by Liz Truss, and then by Rishi Sunak, the Bill’s progress was disrupted by a revolving door of lead ministers, derailed timelines, and the reopening of previously settled policy debates. The publication in October 2022 of the findings of the inquest into the death of Molly Russell - whose tragic death five years earlier had helped push the government to commit to online harms legislation in the first place - provided a stark reminder of the human consequences of slow progress.

After a procedurally unusual return to committee stage, the Online Safety Bill finally completed its progress through the House of Commons in January this year. As it moved into the House of Lords, Clean Up The Internet continued to push for the wording of the user verification duty to be tightened up: to set a clearer framework for what Ofcom’s guidance should cover; to make it clear that the user empowerment filter on non-verified accounts should be effective; and to ensure that platforms would have to let users see whether or not other users were verified. We were grateful to a number of peers for their willingness to engage with us on these issues, including Baroness Kidron, Baroness Morgan, Lord Clement-Jones, and Lord Stevenson.

We had an early breakthrough when one of our suggested amendments was accepted - ensuring that the filtering duty worked effectively, and removing any implicit hierarchy between the non-verified accounts filtering duty and the other filtering duties contained in the same clause. On other matters, the government was reluctant to give ground, arguing that more specificity in the primary legislation was unnecessary, and that Ofcom could be relied upon, and should be given maximum freedom, to fill in the details via codes and guidance. Our proposal to set out principles and minimum standards which Ofcom’s guidance must cover was therefore resisted because, as the minister Lord Parkinson argued, it was “important that Ofcom has discretion”.

There followed some helpful clarifications from the Dispatch Box of the government’s intent, and of its expectation of how the current wording would guide Ofcom’s “discretion”. At Lords committee stage, Lord Parkinson clarified, in response to points raised by Baroness Kidron, that he did not expect the current so-called “verified” options offered by either Meta or Twitter/X to satisfy the duty. He also stated very clearly that Twitter’s “Verified By Blue” subscription (which we analysed here) is “certainly not identity verification”. He suggested that the requirements of the user verification duty “are different from the blue tick schemes and others currently in place, which focus on a user’s status rather than verifying their identity”, and that charging users for verification as a premium feature was unlikely to be allowed, as platforms will be “required to offer all adult users the option”.

When the Commons came to consider the Lords amendments last week, further helpful clarifications were offered by the minister Paul Scully MP, after Siobhan Baillie MP sought one last time to raise the question of making verification status visible to other users. The minister felt it was not necessary to require this of Ofcom via an amendment to the legislation, but was confident that it was within Ofcom’s discretion to require visibility if it considers it proportionate to do so: “I am pleased to confirm that Ofcom will have the power to set out guidance on user verification status being visible to all users. With regards to Online Fraud or other illegal activity, mandatory user verification and visibility of verification of status is potentially something which Ofcom could recommend and require under illegal safety duties.”

So now, attention turns to Ofcom as it prepares to assume its extensive new responsibilities. As Paul Scully set out very clearly last week, it will be required to regulate platforms’ approach to anonymous, inauthentic, and non-verified accounts under at least two parts of the new Act.

Firstly, Ofcom will have to consider the role of anonymous, inauthentic, and non-verified accounts as it prepares its “risk profiles” relating to illegal content under Clause 99 of the Act. These risk profiles will in turn inform its guidance to platforms on how to fulfil their “illegal content duties” under Clauses 9 and 10. Ofcom’s Chief Executive, Melanie Dawes, observed to the Lords Communications and Digital Committee just a couple of months ago that “it is quite clear that anonymity can encourage people to act with greater impunity or to behave *illegally*” (our emphasis). It surely follows that Ofcom must identify functionalities which enable the creation and operation of anonymous accounts as carrying a heightened risk of illegal harm.

Not only that: as our recent research has highlighted, anonymous accounts play a central role in almost all fraud on social media, which is a “priority offence” under the Act. We therefore expect Ofcom to issue strong guidance to platforms on how, under the “illegal content duties”, they must mitigate the risks associated with anonymous accounts. As Paul Scully said last week, this could quite possibly include “mandatory user verification and visibility of verification of status”.

Secondly, under Clause 66 of the new Act, Ofcom will be required to prepare specific “guidance about user identity verification”, setting out how platforms can comply with their duties, under Clause 65, to “offer all adult users of the service the option to verify their identity”. Ofcom will have to consider the findings of its own risk profiles relating to illegal content when preparing this guidance. Paul Scully made it clear that the government believes this guidance could include “guidance on user verification status being visible to all users”. We would add that, as platforms will be obliged to give all users the right to be verified, it would be perverse if users were not also given the right to make that verified status visible in a way that could assist other users.


In the Lords, Paul Scully’s ministerial counterpart, Lord Parkinson, made it clear that the government would not expect existing self-proclaimed “verification” systems such as “Twitter Blue” to be acceptable. Given this, and given the potential for sufficiently rigorous, widely accessible, universally visible user verification to make such a significant contribution to user safety, including tackling illegal harms, we would expect Ofcom to issue robust and comprehensive guidance.

All of this means the 2023 Act marks substantial progress since the 2020 position that the new law would “not put any new limits on online anonymity”. Ofcom has been handed clear powers to end social media platforms’ laissez-faire approach to anonymous and fake accounts, to insist that they take proportionate steps to address these risks, and to require that those steps include offering users verification. Recent comments from both the government and the leadership of Ofcom are grounds for optimism that these powers will be used properly - and Clean Up The Internet will certainly be doing all we can to encourage this. All our research and polling suggests that if Ofcom gets this right, action to rein in the harms associated with anonymous accounts could be one of the most transformative, and popular, changes the Online Safety Act delivers.
