Stephen Kinsella

Looking back on 2023

This time a year ago, I wrote that “for the UK’s online safety agenda, 2022 was a year where much was discussed but not enough changed”, with social media platforms still “hospitable environments for abuse, disinformation and fraud, and seriously dysfunctional as virtual ‘public squares’”. In certain respects, I could write exactly the same about 2023. Social media certainly continues to be (entirely avoidably) unsafe and dysfunctional. However, it has also been a year of legislative progress. Though social media users continue to wait for changes which improve their everyday experience, in 2023 we at least took some decisive steps towards establishing the regulatory framework needed to force social media platforms to clean up their acts.

 

The primary focus of Clean Up the Internet in 2023 was the legislative process for the Online Safety Act (OSA). In 2022, our big breakthrough had been to persuade the UK government to accept the principle that platforms should offer their users ways of verifying themselves and avoiding interactions with non-verified users. In 2023, we sought to build on this breakthrough by making sure Parliament got the detailed wording of the OSA right.

 

For much of the year, this meant working with members of the House of Lords, helping peers understand the harms fuelled by anonymous and fake social media accounts and how the legislation could be tightened up to tackle them effectively. As part of this, we conducted an analysis of social media fraud and of how fake social media accounts are a major enabler for fraudsters. We detailed how a vast amount of fraud is initiated on social media, amounting to around 10% of all crime experienced by individuals in the UK. We looked in detail at the different forms of fraud which take place on social media, and were unable to identify a single one which doesn’t make use of fake accounts. We set out how offering users identity verification, combined with easy-to-use tools to spot and avoid non-verified accounts, could be an extremely powerful fraud prevention measure.

 

We worked with a cross-party group of peers to push amendments to tighten up the Bill’s provisions, with mixed success. Our amendment requiring that the filter on non-verified accounts work “effectively” was one of the very few amendments from any quarter which the government accepted at Lords committee stage. Our other amendments, which sought to define and set minimum standards for what counts as “verification” and to clarify that a user’s verification status should be visible to other users, didn’t make it onto the face of the Bill. However, the debates on those amendments extracted some useful clarifications from the government at the dispatch box. Siobhan Baillie MP then extracted some further reassurances from the minister during “ping pong”, when the Bill returned to the House of Commons.

 

Overall, we would have liked a little more detail in the primary legislation clarifying some aspects of how the user verification and user empowerment duties should work, to ensure Ofcom could move to enforcement quickly and confidently. But we were nonetheless very pleased with where we ended up. Clean Up The Internet helped ensure that the OSA includes a clear duty on bigger (“Category 1”) platforms to offer verification and to enable users to filter out non-verified accounts. In addition to the wording of the Act itself, our work with parliamentarians extracted clear statements from ministers that they considered Ofcom to have all the powers it needs to recommend the kind of regime we were proposing. That felt like a good result, especially considering the distance travelled from the government’s position when we launched Clean Up The Internet just four years ago, when the Online Harms White Paper stated that the government would “not put any new limits on online anonymity”.

 

The OSA received Royal Assent in November, and Ofcom responded quickly by publishing a consultation on its proposed regime to tackle illegal harms. It's huge, some 1900 pages, and we know we aren’t alone amongst civil society organisations in taking some time to digest it all. Our very provisional view is that Ofcom’s analysis of the risk factors which enable illegal content to proliferate on social media platforms is reasonably thorough, and correctly identifies many of the main drivers of harm online - including functionalities related to anonymous and fake accounts.

 

However, the safety recommendations currently being contemplated by Ofcom to mitigate these risks - which platforms can follow in order to be considered compliant with the illegal content duties - appear extremely unambitious. We find it hard to imagine their combined impact delivering a major reduction in illegal content, or a significant improvement in users’ experience. There seem to be yawning gaps between Ofcom’s analysis of the problems, Ofcom’s proposed solutions, and the UK government’s oft-stated objective of making the UK “the safest place in the world to be online”.

 

For example, our own work has identified the hugely significant role which anonymous and fake accounts play in enabling fraud. Fraud is the most commonly experienced crime in the UK, accounting for around 40% of all crime against individuals. Around a quarter of this fraud is initiated on social media - i.e. about 10% of all crime - causing not just huge financial losses but also enormous distress. Ofcom’s risk profile correctly identifies many of the risk factors which enable fraud, fake accounts prominent amongst them.
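To spell out the arithmetic behind that figure (a rough calculation combining the two estimates above, not an additional data point):

\[
40\% \;\times\; \tfrac{1}{4} \;\approx\; 10\%\ \text{of all crime against individuals}
\]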

 

Yet Ofcom’s recommendations relate only to “notable” and “monetised” verification schemes, with the aim of reducing the risk of users being scammed by fake accounts impersonating a famous person. In and of itself, this recommendation is welcome - indeed, we highlighted some of the risks associated with conflating “monetised” and “notable” verification schemes when “Twitter Blue” was first launched. But such a recommendation will not address the majority of ways in which fake accounts are exploited to perpetrate fraud.

 

Part of Ofcom’s justification for not making further recommendations on verification at this stage appears to be that this is not the last time they will consider the issue. It’s a quirk of the way the Act is structured that the user verification and user empowerment duties sit in a separate part from the illegal content duties. The verification and empowerment duties apply only to larger, “Category 1”, platforms, and according to Ofcom’s current timetable its consultation on those measures will have to wait until early 2025. As well as having concerns about the length of this wait, we are not convinced it makes logical sense for Ofcom to decline to consider a recommendation on user verification as a means of reducing illegal harm, under the illegal content duties, simply because it will later have to consider verification for Category 1 platforms under a separate part of the Act.

 

To their credit, in addition to the formal consultation mechanisms, Ofcom have indicated a willingness to enter into further dialogue with civil society groups about such matters. We will certainly take them up on this offer. But at present, given the prevalence of social media fraud, and given that fraud is a priority offence under the OSA, we struggle to see how a recommendation limited to such a sliver of the problem, and to such an unambitious timetable, can be considered a proportionate response. Probing Ofcom’s positions, and encouraging them to go further, more quickly, will clearly be a major piece of work for Clean Up The Internet in 2024.

 

Whilst the OSA dominated our work in 2023, and for good reason, it was not the only show in town. In May 2023, the government announced a “Fraud Strategy”, which included a target of reducing fraud by 10% by the end of 2024. We argued that meeting this target would require new measures to tackle fraud on social media, including on anonymous and fake accounts, and that their implementation would need to proceed more quickly than the timetable envisaged in the OSA. December saw the launch of the government’s “Online Fraud Charter”, a voluntary agreement between the government and a variety of tech companies. This included a few welcome commitments on verification. Most notably, “standalone online dating platforms” have committed to “give users the choice to verify their identity on platforms to allow other users to know they are genuine, allowing users to opt to interact with verified people only”.

 

The inclusion of new measures on verification in the Online Fraud Charter was a hugely welcome development, but it also raises a few questions. Firstly, if the UK government and the online dating industry are able to recognise the relationship between fake profiles and romance fraud, and to agree on a measure to tackle it, why has Ofcom not included anything similar in its illegal content consultation? Secondly, dating and romance (and therefore, sadly, also romance fraud) take place on general social media platforms like Facebook as well as on dating platforms like Tinder - so why are only “standalone” dating platforms being encouraged to offer verification? Finally, if online dating platforms are able to commit to implementing changes within six months, why is the pace of change so slow when it comes to the (better resourced) social media platforms?

 

Beyond the UK, the EU’s Digital Services Act began to take effect from August 2023. Whilst it does not include the same specific provisions on user verification which Clean Up the Internet helped secure in the OSA, the DSA does aim to tackle a similar set of problems, and it similarly focuses on the risk factors which drive the proliferation of illegal content. We therefore hope there might be opportunities for us to gain a hearing for our proposals to reduce harm from anonymous and fake accounts as the European Commission fleshes out its approach to implementation. We also think that requiring social media companies to offer their users verification could become an interesting use case for the new eIDAS 2.0 regulation, which requires EU member states to offer their citizens access to a digital identity wallet.

 

So, to conclude: in both the UK and the EU, we will start 2024, as we did 2022 and 2023, with social media platforms still far too full of entirely avoidable, harmful fake and anonymous accounts, and with ordinary social media users still denied basic tools to protect themselves. However, in both the UK and the EU, the legislative and regulatory framework has moved on significantly in the past 12 months. That’s surely grounds for some optimism as we prepare for 2024.

 

For Clean Up The Internet, it will mean some shift in focus. For the past few years we have first and foremost been seeking to persuade legislators of the need for action, and of the potential for well-designed, proportionate regulation. Now, with regulatory frameworks in place, we must persuade regulators to use their new powers effectively. In doing so, we must strike a balance between challenging those regulators when they lack ambition, and providing them with evidence and encouragement as they undertake the genuinely new and difficult effort of regulating some of the biggest companies the world has ever seen. It promises to be a very interesting, and important, year for us.
