Stephen Kinsella

Further reflections on the Joint Committee report on the Draft Online Safety Bill

We posted some first thoughts about the report of the Joint Committee on the Draft Online Safety Bill immediately following its publication. That first piece focused on the report's positive treatment of our own specific proposals regarding anonymity and user verification, and welcomed its emphasis on regulating the design, systems and processes of social media platforms.


Having now had more time to study the report’s 193 pages, here are some further, more detailed, reflections on what we consider to be some of its other most significant proposals.


1. Recommendation to replace Clause 11, which deals with "legal but harmful" content for adults - in favour of "a narrower, but stronger, regulatory requirement for service providers to identify and mitigate risks of harm in the online world that may not necessarily meet the criminal thresholds".


This proposal would address one of the more confusing and controversial concepts in the draft Bill: the approach to "legal but harmful" content. It was unhelpful to make the question of whether the behaviour now being tackled might previously, or in other contexts, have been "legal" so central to the legislation. All that was needed was greater precision about the behaviour being regulated within the Bill. That, after all, is what legislation does: it addresses harms not covered under existing law, so any discussion of whether the behaviour was lawful before the new legislation is redundant and confusing, and serves only to undermine its intent.

The Joint Committee proposes instead to introduce a greater focus on systems and processes, through an explicit requirement for platforms to have in place "proportionate systems and processes to identify and mitigate reasonably foreseeable risks of harm arising from regulated activities defined under the Bill". It recommends this be achieved by setting out a fuller definition of potential harms, with reference to existing "areas of law that are recognised in the offline world, or are specifically recognised as legitimate grounds for interference in freedom of expression", combined with a "non-exhaustive list of design features and risks associated with them", which could be "amended by Parliament in response to the development of new technologies". Ofcom would then be required to produce "risk profiles", which platforms would be expected to address in their own risk assessments, as well as a "mandatory Safety by Design Code of Practice" setting out expectations on platforms for how they mitigate these risks.

It is through this mechanism that the Joint Committee recommends platforms be required to address the risks associated with anonymous and pseudonymous accounts. It is also a mechanism through which societal harms such as disinformation, which loomed large in the original Online Harms White Paper but is not directly tackled in the draft Bill, could be at least partially addressed. The Joint Committee suggests that a Safety by Design Code of Practice could cover many of the design features commonly exploited by disinformation operations, such as "recommendation algorithms, frictionless sharing of content at scale, use of fake accounts and bots". These are, after all, precisely the features that allow particular behaviours to have a far more serious impact online than they could in the offline world.


We agree this proposal would improve and strengthen the Bill. It would also address the concern that, under the current clause 11, too much power is left to the Secretary of State to define "priority harms", with worrying potential implications for Freedom of Expression. And above all it would resolve the problem, raised by others including Ofcom themselves, that except in the case of Secretary of State-designated "priority harms", platforms were only required to address risks which they had themselves chosen to identify in their risk assessments - in effect enabling them to set and mark their own homework.


2. Removal of the category 1/category 2 distinction - in favour of "a more nuanced approach, based not just on size and high-level functionality, but factors such as risk, reach, user base, safety performance, and business model"

We shared the concern, expressed by many witnesses, that whilst it makes sense to focus regulation where it can address the greatest quantum of harm, the category 1/category 2 approach was a clumsy and overly formalistic way of doing this.

With reference to our own research and proposals regarding anonymity and user verification, it clearly made no sense that only "category 1" platforms should be expected to take basic steps to protect their users and to mitigate the risks associated with anonymous abuse or deceptive, inauthentic accounts. Greater use of "risk profiles", drawn up by Ofcom, is a sensible way of ensuring that smaller but high-risk platforms are not exempt from effective regulation, including regulation of their approach to anonymity and pseudonymity.


3. Recommendation for the removal of the exemption for paid-for advertising - in order to bring into scope platforms’ design, systems and functionalities which enable online advertising.

Advertising is central to the business model of most significant social media platforms and acts as a major driver of decisions about systems and processes. This includes, we suspect, contributing to their reluctance to offer users identity verification, given the likely impact this would have on the numbers of genuine users and ad impressions they are able to claim to advertisers.

Exempting this area of the platforms’ activity from the scope of the regulation therefore risked weakening its bite considerably. Given that the core service provided to the average social media user is normally “free”, it seemed incoherent to exempt from regulation the part of the service from which platforms make their profits. Many consumer organisations have also highlighted that advertising content constitutes a significant proportion of the content which users see, and at present is associated with a wide range of harms including fraud and scams.

The Joint Committee proposes to remove this exemption (clause 39(2)(d)), and therefore to give Ofcom the power to consider platform functionalities associated with the delivery of adverts, and to regulate the platforms as providers of services to advertisers. We think this makes sense, and is consistent with the broader thrust of the Committee's recommendations to tighten the regulatory regime so as to focus on design, systems and processes rather than individual pieces of content. The proposal would represent a sensible division of labour with the Advertising Standards Authority (ASA), which would remain the regulator of the day-to-day content of adverts and of advertisers.


4. Recommendations to strengthen protections of Freedom of Expression across several areas of the Bill - particularly by increasing the focus on platforms’ design, systems and processes.


The Joint Committee recognises that Freedom of Expression can be restricted both through overzealous action to tackle harmful content (which can lead to legitimate content being removed) and through a failure to tackle harmful content (which often has a silencing effect, and disproportionately impacts otherwise marginalised and minority groups), and that certain trade-offs are therefore unavoidable. It acknowledges that the wording of clause 12, requiring platforms to "have regard to the importance of" Freedom of Expression and Privacy rights, is weak compared to the draft Bill's more process-based safety duties. However, it also points out that at present platforms, as private companies, are bound by no such duties to consider Freedom of Expression at all, and that the regulator, Ofcom, as a public body, would be bound by the stronger provisions of Article 10 of the ECHR.

The Joint Committee has not identified a way of strengthening the wording of clause 12. Instead it argues, in our view correctly, that taken together many of its other recommendations represent a better approach to significantly strengthening protection for Freedom of Expression across the legislation. The recommendations it highlights as having this effect include "greater independence for Ofcom, routes for individual redress beyond service providers, tighter definitions around content that creates a risk of harm, a greater emphasis on safety by design, a broader requirement to be consistent in the applications of terms of service, stronger minimum standards and mandatory codes of practice set by Ofcom (who are required to be compliant with human rights law), and stronger protections for news publisher content".


We agree with this analysis. We have long argued that a major advantage of tackling design-level issues - such as the platforms’ current laissez-faire approach to anonymity and user identity - is that it reduces the need to focus on the moderation of individual pieces of content, with all the challenges and trade-offs, and greater regulatory workload, which the latter would entail.


5. Recommendations to beef up Ofcom's powers to supervise platforms' risk assessments - including by setting "binding minimum standards" for their accuracy and completeness, based on Ofcom-written risk profiles which should be "based on the differences in the characteristics of the service, platform design, risk level, and the service's business model and overall corporate aim."

We had raised the concern, as had many others, that the draft Bill would leave platforms free to decide what risks to include in their risk assessments, in effect allowing them to set and mark their own homework. We worried that this would mean platforms chose to underplay or avoid tackling harms where the required mitigations might challenge their business model.

We highlighted the dangers posed by anonymous and pseudonymous accounts as an example of a risk factor which platforms could not be trusted to evaluate satisfactorily, given that mitigations could impact their business model, including by deflating the numbers of ad impressions they are able to quote to their advertising customers. We pointed to the recent example of Twitter's attempts to obfuscate the role of anonymous accounts in the abuse of England footballers following the Euro 2020 final.


We therefore welcome the Joint Committee's recommendation that the "required content of service providers' risk assessments should follow the risk profiles developed by Ofcom". Taken together with the Committee's recommendations to strengthen Ofcom's audit powers and to require the largest and highest-risk platforms to commission "annual, independent third-party audits" of their risk assessments and transparency reports, this should greatly reduce a platform's ability to ignore or underplay inconvenient risk factors.


6. Recommendation that the Bill introduce a statutory system of regulation of age assurance - with Ofcom setting, via a Code of Practice, minimum standards for age assurance products or services.

The Joint Committee report highlights that a range of new regulations - including the child safety provisions in the original draft Bill, the Age Appropriate Design Code, which has recently come into force, and the Joint Committee's own recommendation that the scope of the Bill be extended to cover pornographic sites - are likely to increase the use of age assurance and age verification processes. It recognises the concerns raised by some organisations regarding the privacy implications of such processes, and the risk that some users could face accessibility challenges, but accepts the argument, made by many organisations including the 5Rights Foundation, that such concerns can be adequately addressed through improved governance.

We agree with this approach, and with the suggestion that Ofcom therefore be mandated to produce a Code of Practice on Age Assurance, to "establish minimum standards for age assurance technology and governance linked to risk profiles to ensure that third-party and provider-designed assurance technologies are privacy-enhancing, rights-protecting, and that in commissioning such services providers are restricted in the data for which they can ask."


Whilst checking a user's age is not the same as verifying other identity attributes, identity verification (to mitigate risks associated with anonymous abuse and disinformation) will raise some similar issues about security, efficacy, and privacy. We would suggest that a similar approach could therefore be followed to ensure good governance of identity verification processes. Namely, Ofcom could also be mandated to produce a Code of Practice on Identity Verification, setting minimum standards to ensure that identity verification processes and products are secure, accessible, privacy-enhancing, rights-protecting, and effective. Ofcom could ensure as much alignment as possible between standards for age assurance and those for identity verification, with obvious benefits for user trust and understanding.


We have highlighted here some of what we consider to be the Joint Committee's most significant recommendations. Whilst some degree of "cherry-picking" is necessary for the sake of brevity in a blog post, we agree with the Joint Committee's warning to the government against taking a "pick and mix" approach to the report. Doubtless the new DCMS ministerial team will wish to put its own stamp on things, and to consider additional suggestions from other committees, such as the DCMS Select Committee and the Petitions Committee, which have also held inquiries into the draft Bill and to which we have also submitted evidence. However, the recommendations contained in this substantial report are best seen as a coherent package which, taken together, would improve the draft Bill substantially.

There have been positive indications that the government is indeed willing to take on board the Joint Committee's suggestions and make quite substantial changes to the draft Bill. Both the Secretary of State, Nadine Dorries, in her evidence to the Joint Committee itself, and the Minister, Chris Philp, in his evidence to the Petitions Committee, appeared open to significant revisions of the original draft - with Chris Philp telling the Petitions Committee that the Government is "thinking very carefully" about our specific proposals regarding anonymity.

Assuming the government does indeed take on board the feedback on its draft Online Safety Bill, the result will be a strengthened Bill which enjoys a smoother passage through Parliament and greater public understanding and support. This would be a credit to the work of the Joint Committee and its staff, and a powerful demonstration of the value of pre-legislative scrutiny in ensuring the quality of important legislation.
