
Reducing crime when money is tight: is Ofcom doing its bit?

  • Writer: Stephen Kinsella

In the UK Labour government’s recent spending review, the Home Office was widely regarded as receiving a tough settlement. The Institute for Fiscal Studies described Home Office spending as “squeezed”. The Institute for Government reported the department was “tasked with finding some of the biggest cuts”.


Perhaps mindful of negative headlines, direct spending on the police was, on paper at least, spared cuts. But the National Police Chiefs’ Council described the absence of any increases as “an incredibly challenging outcome for policing”. This disappointment reflects over a decade of very tight police budgets: funding was cut significantly in the austerity years after 2010 and has barely returned to pre-crash levels since. The NPCC statement concluded with a warning that “the amount falls far short of what is required to fund the Government’s ambitions”.


The “Government’s ambitions” to which the NPCC refers include pledges made at the last election to reduce crime, such as halving knife crime and halving violence against women and girls. These are ambitious targets, and it’s understandable that the NPCC sounds weary of being asked to do more without getting more resources. But the government may reasonably believe that there are other, more cost-effective ways to reduce crime. Non-policing measures that tackle the conditions giving rise to crime in the first place can be a more affordable and more durable approach to reducing crime, and also allow the police to focus their resources more effectively.


Regulatory action to prevent crime enabled by social media is an obvious place to start. Levels of crime on social media are high and still rising, include some of the most commonly experienced crimes in the UK, and are large enough for effective interventions to make a significant contribution to the government’s targets. The societal costs of such crimes are immense – a recent DSIT assessment put the societal costs of a subset of online harms at nearly £30 billion per year. These are also crimes in which traditional, after-the-event policing has struggled to make much of a dent.


For a start, fraud targeting UK citizens represents an estimated 41% of all crime, according to the Crime Survey for England and Wales for the year ending September 2024. That makes it by far the most commonly experienced crime. Around 80% of fraud is cyber-enabled, and the National Crime Agency describes social media platforms as “a key facilitator of authorised push payments frauds”. The scale, complexity and cross-border nature of online fraud make it a notoriously challenging crime to police effectively. In other words, social media is a key enabler of the single largest category of crime experienced in the UK, and it’s a category where it is hard to imagine additional police effort alone bringing the numbers down significantly. Surely regulating social media platforms to require them to provide a less hospitable and profitable environment for fraudsters makes sense for a government looking to bring overall crime levels down?


Social media is also a key vector for violence against women and girls. The Economist Intelligence Unit found in 2020 that 38% of women globally had direct experience of online violence, noting that the “bulk of the current efforts to counter online or offline gender-based violence focus on post-experience responses as opposed to prevention”. Social media platforms enable a significant number of gender-based offences, including controlling or coercive behaviour, stalking and harassment, and intimate image abuse. It’s hard to imagine the government delivering a 50% reduction in violence against women and girls without this including a substantial reduction in online offences.


The list of crimes taking place in significant volumes on social media could go on – fraud and gender-based violence are just two of more than a dozen categories of crime identified as “priority offences” within the Online Safety Act. The designation of these “priority offences” reflected a recognition from government that user-to-user services play a significant role in enabling these crimes, and an expectation that the requirements introduced by the Online Safety Act would drive a reduction.


So does the implementation of the Online Safety Act mean that the government can be confident that reducing crime online will deliver a big chunk of its crime reduction targets, even as funding for the police and the Home Office is squeezed? Ofcom’s slow approach to implementation means we will have to wait at least another year for any impact from the Act to start showing up in the crime statistics, given that many of the key provisions meant to reduce crime (such as the Illegal Content Codes of Practice) have only just come into force. Of greater concern, though, is the content of the Codes themselves, which are widely regarded as underwhelming.


It’s surely revealing that neither Ofcom nor its sponsoring department, DSIT, has offered any estimates, let alone set any targets, for the level of crime reduction they expect the measures in the Illegal Content Codes to deliver. The Labour government has set very clear crime reduction targets (“Halve the level of violence against women and girls”; “Halve the incidents of knife crime”) for which the Home Office will be accountable. But it’s not obvious that Ofcom or its supervising department is keen to share the burden of delivering on these pledges. The Chair of the Commons Science, Innovation and Technology Committee, Chi Onwurah, recently pushed DSIT on what difference it expected the Online Safety Act to make to levels of fraud – the response, “the department is not aware of any specific estimate for fraud reduction by Ofcom”, is hardly encouraging.


Ofcom’s treatment of fake and anonymous accounts is a worrying example of how its current approach risks missing big opportunities to prevent crime. Ofcom’s Register of Risks, produced as part of the Illegal Content Codes, recognises that fake and anonymous accounts are a major problem. It identifies CSEA, harassment/stalking/threats/abuse, controlling or coercive behaviour, proceeds of crime, encouraging or assisting suicide, terrorism, foreign interference, drugs and psychoactive substances, and firearms, knives and other weapons offences as “key kinds of harm” linked to anonymous or fake user accounts. Yet despite this, Ofcom chose not to include any measures to tackle the misuse of fake accounts in its first version of the Illegal Content Codes of Practice, or in its recently published consultation on a first set of proposed revisions to the Codes.


Instead, Ofcom has deferred consideration of user identity verification measures until “Phase 3” of its enforcement process, which it has delayed even consulting on until 2026. Ofcom has proposed to include user identity verification as a “good practice step” for platforms in its draft Guidance on “A safer life online for women and girls” – but crucially, the “good practice steps” are non-mandatory. So despite having identified fake and anonymous accounts as a key driver of a vast range of priority offences, and acknowledging that user identity verification measures are a “good practice step” that could help, the earliest Ofcom may start to require platforms to do anything about it is 2027. 


Deciding not to consider user identity verification measures for inclusion in its Illegal Content Codes, and thus leaving unaddressed the risks from fake and anonymous accounts which Ofcom had itself identified, does not suggest a regulator ruthlessly focused on helping reduce crime. As well as choosing to delay any crime reduction benefits from verification measures by at least two years, Ofcom is proposing to apply user verification measures only to the largest “Category One” platforms, rather than to any site where there is a high risk of criminality linked to the use of fake or anonymous accounts. That means, for example, no user verification requirements for dating sites, despite high and increasing levels of romance scams.


Ofcom’s reluctance to introduce measures to drive down crime enabled by fake and anonymous accounts reflects a broader timidity. It is important to stress that these are Ofcom’s decisions – the legislation would have allowed (and indeed seemed to encourage) the regulator to be more vigorous. Cross-referencing the risk factors for priority offences which Ofcom has identified in its Register of Risks against the mitigating measures in the Illegal Content Codes reveals many gaps. As the Online Safety Act Network of civil society groups observed, a “disconnect between the evidence of harm in the risk profiles and the mitigation measures in the codes of practice” is one of a number of flaws in Ofcom’s chosen approach to implementation, which “significantly impact the likely impact” of the Illegal Content Codes.


Ofcom’s decisions to produce weak Codes, leaving key design features like fake and anonymous accounts unaddressed, are particularly disastrous because of the “safe harbour” provisions of the Online Safety Act. These mean that if a platform follows the letter of Ofcom’s gap-ridden Illegal Content Codes, it will be considered compliant, even if that does not significantly reduce crime on the platform. This surely wasn’t what legislators envisaged when including the safe harbour provisions – they envisaged Ofcom producing sufficiently stringent Codes that platforms would have “earned” a level of certainty.


In the case of user identity verification measures, Ofcom is at least required by the Act to produce Guidance on, and then enforce, some form of user identity verification duty for Category One platforms. Ofcom may have deferred consideration of user verification to the very last phase of OSA implementation, and then further delayed getting to that phase, but it is required by law to get there in the end. However, given the regulator’s slowness and timidity to date, we can’t take it for granted that its user identity verification Guidance will be sufficiently stringent. For user verification to deliver its considerable crime reduction potential, platforms will need to implement systems which are highly effective, accessible, and privacy-respecting. It’s hard to imagine platforms doing that unless Ofcom’s Guidance requires it. And given its approach so far, it’s also not all that hard to imagine Ofcom’s Guidance failing to deliver.


Clean Up The Internet hopes, therefore, that the Home Office, the police, and all other parts of government with a stake in crime reduction are watching Ofcom closely. If the government expects crime to come down without spending more money on policing, then it will be essential for Ofcom to make effective use of its powers to address the online enablers of crime. We assume that concerns are being raised behind the scenes, but they may need to be raised more publicly too.


We’d suggest that this should include taking a particularly close interest in Ofcom’s approach to the user identity verification duty. Ofcom has accepted that anonymous and fake accounts are a key risk factor in a wide range of offences, and that user identity verification measures would help mitigate those risks, but it has yet to act. Ofcom’s Guidance on user identity verification should be seen as an acid test of whether it is going to do its bit to deliver the government’s crime reduction ambitions – something on which the jury is currently very much out.
