Stephen Kinsella

The government’s “initial response” to the Online Harms consultation

The British government published its "initial response" to the consultation on the Online Harms White Paper on 12th February. More than six months have passed since the closure of the consultation to which it was responding, which had been based on a White Paper issued in April 2019. Despite this, the government emphasised that its latest announcement was only an "initial response" and that it is still not ready to set out all the detail. It justified this approach in the following way:

"We are committed to taking a deliberative and open approach to ensure that we get the detail of this complex and novel policy right. While it does not provide a detailed update on all policy proposals, it does give an indication of our direction of travel in a number of key areas raised as overarching concern across some responses."

There was much to welcome in the announcement. We were pleased to see a renewed commitment to introducing a "duty of care" for social media companies. This has the potential to focus regulation at the level of systems and design, rather than on individual pieces of problematic content. It also goes a long way towards addressing concerns raised in some quarters about a risk to freedom of expression, particularly when it comes to addressing harms caused by "legal but harmful" content. The "duty of care" approach has the potential to be influential globally: an innovative approach to digital regulation which other countries could copy.

We were also pleased to see confirmation that the government is minded to appoint Ofcom as the regulator in this area. We see this as a pragmatic way of ensuring that a regulator with the capacity and expertise to have an impact is ready much more quickly than if a new regulator were to be created from scratch. It was also encouraging to see the government recognise the importance of freedom of expression, and that online harms regulation should aim to protect and enhance freedom of expression rather than compromise it. The main proposals, particularly the "duty of care", are consistent with this recognition.

We are somewhat concerned about the pace of progress. It's important to get legislation right, but it's also important to recognise the urgency of the problem which this legislation is required to tackle. Online abuse, bullying and trolling are harming individuals, undermining democratic discussion, and excluding vulnerable people from debates every single day. Right now, disinformation spread online is making life harder as society responds to the Covid-19 pandemic, and as humanity seeks to respond to other major challenges such as hateful far-right extremism and climate change. When Digital Minister Matt Warman appeared before parliament to answer MPs' questions, several raised concerns about the timetable. The former secretary of state and co-author of the original White Paper, Jeremy Wright, said, quite bluntly, that it was time for the government to "get on with it".

We are also concerned that whilst the original White Paper talked in some detail about a full range of online harms, and recognised their impact on the healthy functioning of British democracy, the "initial response" was far less detailed. There is much talk of addressing harm to individual children and "vulnerable people", but far less about the wider range of societal harms highlighted in the White Paper. We strongly support seeking to address harms which impact upon children and vulnerable people. But we hope that the limited space given to other types of harm in the "initial response" reflects its relatively brief, "initial" status, rather than a scaling back of ambition. It is crucial that a regulator is empowered to look at the full range of societal harms, and that social media companies are required to be transparent about, and accountable for, the design decisions which they take.

One example of a societal harm which the new online harms regime must address is the problem of disinformation. The spread of disinformation is facilitated or impeded by design decisions taken by social media platforms. We have highlighted the extensive evidence of the contribution which anonymity and false identities make to the ease with which disinformation spreads on social media. Platforms constantly make design decisions: how to manage user identity, what verification steps to offer users, and what rules and enforcement measures to apply to restrict the abuse of anonymity or the use of false identities. These are design decisions for which a platform should be held to account by an independent regulator.

Another example of a societal harm which any regulator must be empowered to consider is the impact of abuse, incivility and trolling on the inclusivity and quality of democratic debate, as well as on individual "victims". The health of debate on the major platforms matters hugely to the health of democracy, given their role as a digital "public sphere". Yet at present discussions get derailed or polarised, and already marginalised groups get excluded and silenced. Again, design decisions by the platforms, including the way they manage issues of anonymity and identity verification, can encourage or discourage such harmful behaviour. The choices which platforms make about the design of their systems, with a view to discouraging incivility and abusive behaviour and making it easier for other users to avoid it, should be choices for which platforms are required to be transparent and subject to independent regulatory oversight.

Another question to which the "initial response" does not provide clear answers is how to ensure the new regulator has proper teeth. This is particularly challenging given the size and global nature of the major tech companies. Experience from other jurisdictions suggests that even large fines may not be sufficient to deter poor behaviour by hugely profitable companies. The White Paper recognised that fines may not be enough and suggested other options, such as criminal liability for company managers. Clean up the Internet has made an additional suggestion: if a tech company has failed, through its design choices, to make an individual user accountable and therefore legally liable for material posted, the company itself should be required to accept legal liability. We hope that the lack of detail in the "initial response" reflects the government's continued exploration of enforcement options, rather than a weakening of resolve to introduce a strong and credible range of enforcement powers.

Very soon after the publication of the "initial response", the responsible Secretary of State, Baroness Morgan, stepped down. She was replaced by Oliver Dowden MP, the third Secretary of State since the publication of the Online Harms White Paper last year. There was also change lower down the hierarchy at DCMS, with Caroline Dinenage MP taking over from Matt Warman MP as the minister responsible (although Matt Warman remains in the department). Such turnover of personnel can introduce further delays, or create further uncertainty about the direction of future policy. Whilst both Dinenage and Dowden have expressed continued support for the Online Harms agenda, it is early days, and it remains to be seen what differences of emphasis they may bring to their respective briefs.

Overall, Clean up the Internet is pleased that the government is continuing to pursue the Online Harms agenda, albeit with some questions still to answer about the detail and the timetable. We see this agenda as having the potential to create a regulatory regime which could have a big impact on the design problems currently plaguing online spaces, such as the ease with which anonymity and inauthentic identities can be abused. We also see it as having the potential to blaze a trail which other countries can follow. We'll continue to engage with the process, and to work with all those in civil society, industry and government who share our aims, to seek to ensure that this potential is met.
