Stephen Kinsella

Online Safety Bill - Lords committee stage

The Online Safety Bill is continuing its progress through committee stage in the House of Lords. This stage differs from a Commons committee stage in that it is held in the main chamber rather than a committee room, and all peers may participate rather than the debate being limited to a Public Bill Committee. However, as with the Commons committee stage, it is a stage for detailed line-by-line scrutiny of the legislation. Amendments proposed by peers are debated, but they are rarely accepted or pushed to a vote. The government conventionally uses this stage to explain its own position, to explore the proposals put forward by peers, and to introduce its own amendments.

Clean Up The Internet has been supporting four modest amendments at Lords committee stage, which would in combination strengthen the Bill’s provisions on anonymity and user verification, and ensure that they deliver the intended benefits for users. Versions of these amendments had previously been proposed in the Commons by Siobhan Baillie, Conservative MP for Stroud. For this stage in the Lords, they were tabled by the Labour frontbench spokesperson Lord Stevenson and the Lib Dem frontbench spokesperson Lord Clement-Jones, and were numbered 40, 41, 141, and 303.

Given the conventions of committee stage, it would have been unrealistic to expect all these amendments to be accepted, or voted on, at this stage. However, we hoped that the debate would at the very least help tease out the government’s current thinking, and perhaps encourage the government subsequently to introduce its own improvements to the Bill along the lines we are suggesting. Failing that, we hoped it would lay the groundwork for future amendments which could be put to a vote at report stage.

We were therefore delighted when the government minister, Lord Parkinson, signalled that one of our amendments, amendment 40, would be accepted, by adding his name to it alongside those of Lord Stevenson and Lord Clement-Jones. This was unusual enough to be remarked upon in the chamber, with Baroness Kidron commenting “may there be many more like that”. The amendment clarified that the optional filter offered to users who wish to avoid interaction with non-verified accounts must work “effectively”. This ensures consistency with the Bill’s other filtering duties, and removes a risk of weaker or delayed compliance.

The government’s response to the other three amendments was more along the lines we had expected at this stage. Responding to our amendments 141 and 303, which sought to define user verification and to give Ofcom a clearer mandate to set minimum standards for how platforms’ verification processes should work, Lord Parkinson argued that doing so was unnecessary and would reduce the regulator’s “discretion”.

Several peers probed this position, questioning whether the current wording would allow platforms’ existing so-called “verification” offerings to count. Lord Parkinson responded that the User Verification Duty intended something “different from the blue tick schemes and others currently in place, which focus on a user’s status rather than verifying their identity”. Helpfully, he stated very clearly that Twitter’s “Verified By Blue” subscription (which we analysed here) is “certainly not identity verification”. He was a little more tentative about Meta Verified (which we analysed here), saying that he would “write to confirm”. He was also very explicit that he did not consider “user identity verification” to necessarily mean “that users must display their real name”.

Overall, the minister offered some welcome insights into what the government considers not to be verification, but no accompanying definition of what it is. He set out his disagreement with the definition proposed in our amendment - but did not offer a satisfactory answer as to why it is better to have no definition of verification in the Bill at all than for the government to bring forward its own. He was drawn into ad hoc assessments of why current offerings from some of the major platforms should not satisfy the duty, but did not provide a compelling justification for not giving Ofcom a clearer framework to work within when assessing platforms’ future offerings.

Responding to our amendment 41, which would require platforms to make a user’s verification status visible to other users, Lord Parkinson’s position was in some ways more straightforward. He stated that the government was “not minded to accept it” because it disagrees with the idea that users should be able to see other users’ verification status. The stated reason is that providing users with this information would create a “two tier internet”.

This position is at least clear, and echoes the position the government put forward to us some weeks ago. However, it doesn’t make a great deal of sense when considered against the government’s stated aim of “user empowerment”, and appears inconsistent with other measures included in the Bill.

Lord Parkinson did not expand greatly upon what he meant by the risk of a “two tier internet”, beyond a concern that it could disadvantage users “who are unable to verify themselves for perfectly legitimate reasons”. We are not aware of this concern ever having been properly articulated elsewhere. However, we infer that the government is concerned that if it were transparent which accounts were verified and which were not, users might engage differently with accounts which are not verified, and this could potentially disadvantage a user who had chosen not to verify themselves because they were, for example, trans, or a whistleblower.

The weakness of this argument is that if an account has a “perfectly legitimate reason” for choosing not to be verified, other users are likely to be able to understand that reason and so would not expect verification. An LGBTQ+ person using social media to explore intimate aspects of their identity, for example, would not necessarily be expected to be doing so under a verified real name. Other users will be more interested in the content of their posts than in their real identity. Similarly, for a whistleblower-type account, the nature of the content, and often also the chosen handle (e.g. “Secret Barrister”), already offers an obvious rationale for acting under a pseudonym and remaining unverified.

Our research into how fake and non-verified accounts are misused suggests that the accounts most likely to suffer a significant disadvantage if their non-verified status were transparent would be those seeking to conceal their identity to abuse others, or to create a deceptive identity to peddle fraud or disinformation. Our recent report on fraud, for example, finds that the ability to create deceptive social media accounts is a functionality exploited in almost all social media scams - and that were the Bill to make verification statuses visible to other users, “check whether the user you’re interacting with is verified” could become an invaluable piece of fraud prevention advice.

In other words, the kinds of accounts most likely to find themselves in the lower tier of Lord Parkinson’s so-called “two tier internet” would be scammers, abusers, and peddlers of disinformation - surely a desirable outcome for the vast majority of users, including vulnerable adult users. The government appears to be passing up an opportunity to provide users with new tools against known harms, in order to guard against an ill-defined, unproven risk of a “two tier internet”.

A second reason why Lord Parkinson’s concerns about a “two tier internet” make no sense is that the Bill will require platforms to offer users the option of filtering out non-verified accounts. Indeed, during the same debate in which Lord Parkinson raised his objection to making verification status visible, he accepted an amendment to ensure that the optional filter is “effective”. Transparency of verification status would offer users a lighter-touch option to consider, as an alternative to enabling the filter. It would empower them to bring their own judgement to what a user’s choice not to verify might mean about their trustworthiness, when viewed in the context of the rest of the account’s profile and behaviour.

We will of course be making these points to the government, as will a number of concerned peers of all parties. Thus far, quite a number of peers, including some on the government benches, have expressed concern that the government has been unusually unwilling to engage with constructive criticism of the Bill as it stands. There are several significant areas of contention, such as the absence of provisions in the Bill specifically to tackle violence against women and girls, where a considerable number of peers from all parties have raised concerns. We have observed a growing number of polite-but-firm invitations for the government to, as Baroness Morgan put it, “reflect on the strength of feeling expressed by the House”, and requests for the government, as Baroness Stowell put it, to show “a little more responsiveness and willingness to consider movement”. We hope that the government will demonstrate a little more receptiveness and flexibility in the run-up to report stage, including on the amendments relating to user verification.
