©2020 Clean Up The Internet. 

David Babbs

A round-up of developments relevant to the UK Online Harms agenda

It would be possible, in theory at least, for the big tech platforms to address voluntarily the design flaws which contribute to the degradation of political discourse. This could include tackling the issues which Clean up the Internet has identified with their current approach to anonymity, restricting its misuse for online abuse and misinformation. In practice, however, this seems unlikely – it certainly hasn’t happened to date.


Clean up the Internet therefore believes that government regulation has an important role to play in tackling the degradation of online discourse. We support legislation to ensure that the big platforms don’t focus only on their own commercial interests, and are instead required to design and manage their services to address online harms, deliver good societal outcomes, and provide appropriate protections for individual users.


Overall, momentum does seem to be building behind the idea that it’s time for greater regulation of tech platforms in the UK. Here’s an attempt at a round-up of relevant developments which we’ve spotted since the UK General Election of 12th December 2019. This list probably isn’t exhaustive and we’d welcome suggestions for things which should be added.



Government gearing up to announce Online Harms legislation?


The second Queen’s Speech in just over two months took place on 19th December. The freshly re-elected Boris Johnson government restated an intention to bring forward legislation to address Online Harms. The Queen’s Speech and its accompanying notes were both fairly light on detail, and failed to specify timings. One notable difference from the pre-election Queen’s Speech in October was that the commitment to a phase of pre-legislative scrutiny was dropped.


A lack of detail on timings for its Online Harms agenda led to concern that the government was displaying insufficient clarity or sense of urgency. This impression deepened when statements by Baroness Morgan, the newly ennobled Culture Secretary, and Matt Warman, the digital minister, both failed to mention a timetable. Uncertainty was further increased by the lack of clarity over the future of the government’s plans for age verification of adult websites, which had been legislated for in Part 3 of the Digital Economy Act 2017 but never implemented. The government has stated that it will deliver the Digital Economy Act’s objectives through its proposed online harms regulatory regime, but has given no details as to when this would all come into effect.


The Times reported in late December that the government was due to publish its plans “in the next month”, and that they’d include a statutory duty of care, a tough enforcement regime, and Ofcom as the regulator. No such announcement came in January. The Telegraph then reported on 5th February that a similar set of plans to those mentioned by The Times was before cabinet, predicting an announcement the following week to coincide with Safer Internet Day. Even if the newspapers’ reports on timings prove equally wide of the mark, it seems fairly clear that discussions remain live within government.



Lord McNally and Carnegie UK table Bill in the Lords


Carnegie UK’s work on online harms, led by Will Perrin, Lorna Woods, and Maeve Walshe, has been at the forefront of the thinking around how regulation could work. They were amongst the first to develop the idea of introducing a statutory "Duty of Care", overseen by an independent regulator.


Lord McNally, a Liberal Democrat peer with a longstanding interest in questions of media regulation, teamed up with Carnegie UK to table a short paving Bill which would require the Secretary of State for DCMS to instruct Ofcom to take on the role of the interim Online Harms Regulator and prepare for the introduction of a statutory duty of care. This Bill was tabled as a Private Member's Bill on 14th January 2020.


Such a Private Member's Bill is unlikely to become law, but tabling it was nonetheless a significant development. It obliges the government to engage with the issues it raises and set out a response. It gives other parliamentarians an opportunity to explore the issues with a tangible option for legislation in front of them. There's been some legitimate criticism of the range of different ways in which the concept of a "duty of care" could be applied, and of vagueness in the government's own interpretation - debate of the McNally Bill would be an excellent opportunity to explore this in more detail, and to test the arguments of those such as Article 19 who believe that a "duty of care" is not an appropriate regulatory tool.


A date for a second reading, where a debate on the Bill’s principles could be held in the House of Lords, has yet to be agreed, and could yet be overtaken by events if the government does indeed announce its own plans.


You can read the full statement from Lord McNally here, the draft explanatory notes here, and the draft Bill here.

DCMS Select Committee – new chair, diminished scrutiny for Big Tech?


The General Election last December led to the dissolution of parliament and along with it the dissolution of Select Committees. This meant a period of a few months without a DCMS Select Committee, followed by fresh elections for a chair on January 30th. Damian Collins MP, the previous Chair, was narrowly defeated by Julian Knight MP, a longstanding committee member.


Damian Collins MP was widely seen as an effective and independent Chair. He had angered some in government through his committee’s robust inquiries into digital campaigning during elections and during the 2016 EU referendum, including hearings on the role of Cambridge Analytica. His defeat by Julian Knight MP was therefore interpreted by some as revenge inflicted by the Conservative whips, although it should be noted that committee chairs are elected by a secret ballot of all MPs.


Encouragingly, Julian Knight MP has indicated a continued focus on Online Harms under his chairmanship, listing it as one of his three priorities alongside improving broadband and the future of the BBC:


“I am keen that the DCMS Committee looks again at online harms, particularly as we head towards statutory regulation. In the last parliament we considered the addictive nature of immersive technologies and our dependency on social media. We need to make sure that dangers are properly understood and protections put in place.”

Centre for Data Ethics & Innovation recommends regulation of targeting of online content


The CDEI is an independent body tasked with advising the UK Government on the ethical dimensions of AI and data-driven technology. On 4th February, it published a report into the future of online targeting. Its recommendations took an encouragingly systemic approach (as opposed to focusing on specific pieces or types of content), for example by recommending a new Online Harms regulator with the power to set out general responsibilities in codes of practice. It also highlighted the importance of improved transparency, and of third-party access to data, to enable proper scrutiny of platforms’ approaches.


The full report can be viewed here, and the chair of the committee wrote a useful summary for the New Statesman which can be read here. The report was accompanied by some detailed research into public attitudes to online targeting, which can be read here. Demos’s Centre for the Analysis of Social Media (CASM), whose director Alex sits on the Clean up the Internet Advisory Board, wrote a thoughtful response here.



Could a post-Brexit trade deal with US undermine regulation of Online Harms?


Following the UK’s departure from the EU on 31st January, the government has stated that post-Brexit trade deals, including one with the USA, are a key objective. Concerns about such a trade deal have so far focused on the implications for the NHS or for food safety standards, but the UK’s ability to regulate digital technology companies may also become an area of contention.


“Section 230” of the Communications Decency Act has long acted as a major limit on the US Federal Government’s ability to regulate the internet. It has many defenders in Silicon Valley and amongst digital liberties groups, but also a growing number of critics who argue that shielding the likes of Facebook from any liability for the content they host is outdated and damaging. Section 230-style provisions were included in the United States-Mexico-Canada Agreement, and in a recent deal with Japan. It appears likely that US-based global technology platforms like Facebook and Google will lobby hard for similar Section 230-style safe-harbour provisions to be on the table in any future trade negotiations with the UK.


Politico reported that the outgoing chair of the DCMS Select Committee, Damian Collins, has serious concerns:

"I think we should be really clear with the American government that that would essentially undermine the work we've tried to do on online harms," he told POLITICO about Section 230-style trade language. “It shouldn't be something we agree to as part of a trade agreement because it would kick away all the good work that's been done."

The New York Times, meanwhile, notes that whilst there is apparently strong support for Section 230 from the Trump Administration, it is under increasing attack from a “motley group of powerful companies” in the US.



Information Commissioner’s Office publishes “Age Appropriate Design” Code


On 26th January 2020, the ICO published its “Age Appropriate Design” Code. This is the ICO’s statement of how it considers the GDPR should be applied with regard to children and young people. An interesting question posed by the code is how it will relate in practice to the questions of age and identity verification which should also be addressed in future online harms legislation. The code advises companies to:

“Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.”

This seems likely to have implications for the major platforms, which currently state minimum ages but in practice do nothing to enforce them. For example, the Open Rights Group, which is opposed to age verification, interprets the code as meaning "age gates" are likely, as "companies must establish the age of users, or restrict their use of data. ORG is concerned that this will mean that adults only access websites when 'age verified'". The code will likely come into effect in the second half of 2021, and the way it interacts with other aspects of the Online Harms agenda remains to be seen.



Facebook’s Nick Clegg unsurprisingly urges caution on tax and online harms regulation


Facebook’s number three, Nick Clegg, gave an interview from the World Economic Forum in Davos, and took the opportunity to urge a delay to a digital services tax (seen by many as a way to fund measures to tackle Online Harms) and to claim that Facebook had no “commercial incentive” to show users “hateful” or “extreme” content. Critics were quick to point out that Facebook also has insufficient “commercial incentive” to tackle the many problems with its platforms, which is why regulation is required. The Children’s Commissioner for England, Anne Longfield, wrote a strongly worded public reply accusing Clegg of “failing our children”. Separately, a group of 100 children’s charities wrote to Facebook criticising its plans to introduce end-to-end encryption on its messaging service.



Ofcom publishes 2020 priorities - including work on Online Harms


Ofcom's "planned programme of work" for 2020-21 lists Online Harms as one of five key priorities:

" we will work with Government on new and emerging policies protecting consumers from harmful content online, and more generally ensure communications services online work for consumers "

One key question posed by the government in its 2019 Online Harms White Paper consultation was whether Ofcom would be the best place for new regulatory responsibilities to sit - and the Carnegie/McNally Private Member's Bill suggests Ofcom as, at the very least, an "interim" regulator. This priority perhaps suggests Ofcom has been told to anticipate some ongoing role. There's an opportunity to respond to their workplan here, with a deadline of 25th February.