David Babbs

New report from HOPE not hate on Online Harms

Earlier this month, anti-extremism group HOPE not hate published a new report, titled "A BETTER WEB: regulating to reduce far-right hate online", which explores how UK Online Harms legislation could help tackle far-right hate and extremism. It’s grounded in years of experience battling extremism and a deep understanding of how it thrives online, and presents policy makers with a set of practical and balanced suggestions for how regulation should work.


HOPE not hate has tracked the far-right’s increasing use of digital platforms over the past decade. The report explains how different platforms are used for different purposes, with “recruitment, propagandising at scale, disruption of mainstream debate, and the harassment of victims” taking place on the large-scale mainstream platforms owned by Facebook, Google and Twitter. Smaller, more niche platforms are used for “further radicalisation and organising”. In the past, HOPE not hate has frequently alerted the social media platforms to the ways they are being used for extremism, and has sought to pressure them to take action. The group now argues that such efforts from civil society have proved necessary but insufficient in tackling the problem, and that there is now a clear need for government action in the form of regulation and enforcement.


A strength of this report throughout is that it avoids making grand claims of perfect solutions. It makes the case for why it’s time for state regulation, but there’s no suggestion that this should replace the “vital work” currently being done by civil society. It makes a convincing argument for how state regulation could make life harder for the far-right, but there’s no suggestion that it could eliminate extremism entirely. And there’s a welcome recognition that online harms happen within a broader societal context - and that part of the answer therefore lies in measures far beyond the scope of a tech regulator, such as action to address broader systems of prejudice and injustice, or increased investment in digital literacy.


HOPE not hate is broadly positive about the thrust of the proposals contained within the 2019 Online Harms White Paper. The report identifies these as: a statutory duty of care; an independent regulator (likely Ofcom), which creates codes of practice to underpin the duty and holds enforcement powers should companies fail to follow them; and enhanced transparency obligations for tech companies. It highlights that, under the White Paper proposals, responsibility for implementing content moderation would continue to reside with the platforms, but with improved levels of accountability and transparency, and that there would not be - as is sometimes suggested by what the report elsewhere terms “more fundamentalist free speech organisations” - any direct removal of content by either government or regulator.


Perhaps wisely, HOPE not hate focuses its commentary on the 2019 White Paper itself. Beyond a brief mention of the initial response document published in February 2020, it avoids any attempt to describe or interpret the numerous changes of personnel at DCMS, or the myriad hints and partial statements, which have occurred since the White Paper’s publication. The report’s recommendations therefore focus on building on, and filling in some of the gaps in, the White Paper’s thinking - and offer suggestions on how best to address some of the criticisms which have been levelled at it since.


One of the report’s most helpful contributions is its careful exploration of the “legitimate concerns over privacy and freedom of expression” which arise in relation to proposals to tackle Online Harms. In a debate which can often feel bogged down in purist positions, HOPE not hate manages to take a more balanced approach. At the heart of its analysis, grounded in years of grappling with such questions in relation to strategies to tackle fascism, is a recognition that a rounded consideration of freedom of speech requires asking who is being silenced or excluded, as well as who is speaking.


"At present, online speech which causes division and harm is often defended on the basis that to remove it would undermine free speech. However, in reality, allowing such speech to be disseminated only erodes the quality of public debate, and causes harms to the groups such speech targets. This defence, in theory and in practice, minimises free speech overall. This regulation instead should aim to maximise freedom of speech online for more people, including those from minority backgrounds whose speech is consistently marginalised online and elsewhere. This principle should be front-and-center of the government’s public information campaign surrounding this bill, as it otherwise stands to be misconstrued as an infringement upon free speech. For this reason, any such campaign also ought to be clear about what the regulator will and will not be able to do, so that it cannot be misrepresented."

The report goes on to suggest that over time such a formulation could be refined through research and monitoring to track its impact, including “a measure of who is not on a platform to highlight and understand marginalisation through harmful and divisive speech” and through “following up with those who have deleted their accounts or become inactive to ask why.” They propose that a mechanism for “super-complaints” could provide further room for debate and independent challenge as to how the balance is being struck - for example, “complaints raised against deplatforming could be brought by more fundamentalist free speech organisations or activists”.


Another important argument which HOPE not hate develops, again underpinned by its in-depth, practical experience of how the far-right operates online, is that “extremists’ abilities to perpetrate online harms are often exacerbated by the design of sites”. They argue that an effective duty of care should seek to address this link between online harms and platform design:

“Build into the duty of care prohibitions and recommendations on platform technology design. Prohibitions could be against designs known to cause harm, for example, particular recommendation algorithms known to lead to ever more extreme content. Recommendations could include best practice on platform technology design, and this should be open to revision given further research.”

The report highlights algorithmic recommendation systems as one important example of such a design factor which requires further scrutiny. Another, which Clean up the Internet has highlighted, is the approach which platforms take to anonymity and identity verification. The available evidence suggests that the current laissez-faire approach to identity management on social media platforms - which enables large-scale identity concealment and identity deception - plays an important role in online abuse. We’ve made suggestions for how different designs could restrict the harm caused by identity deception and concealment whilst safeguarding legitimate uses of anonymity. As with regulation of algorithms, regulation of platforms’ approaches to anonymity should seek to develop best practice on identity management, and be open to revision as more research is conducted.


The final chapter of HOPE not hate’s report locates its suggestions in a wider context of efforts by democratic governments to regulate against online harms, focusing on the NetzDG, which was introduced in Germany at the beginning of 2018. It highlights that some of the concerns raised at the time of the law’s passing by libertarian and “anti-censorship” groups - that it would lead to a huge increase in content being taken down - haven’t really been borne out.


However, HOPE not hate flags a bigger issue which has emerged with the approach taken by the NetzDG: increasing legal pressure on platforms to moderate content, without introducing sufficient transparency or accountability for how the platforms take moderation decisions, has had the effect of “arguably, effectively ‘privatising’ the judiciary in this context”. The implication seems to be that the approach so far suggested by the UK government - platforms held to a statutory duty of care, regulatory standards, and a level of transparency, but left with some freedom to develop their own approaches to fulfilling that duty and implementing those standards - is an improvement on the one pioneered in Germany.


Overall, this is an excellent report that makes a compelling case for the government to get on with introducing regulation, and offers some sensible suggestions for how to get it right. The success of far-right groups and ideas in exploiting the current architecture of social media - and the current lack of regulation - harms individuals and vulnerable groups, and threatens democratic institutions. HOPE not hate doesn’t claim that Online Harms legislation will solve the whole problem. But they make a strong argument that it has the potential to help - and that it’s the government’s responsibility to try. If there’s one criticism that could be levelled at the report, it’s perhaps that this analysis would have been even more valuable last year, during the original White Paper consultation. However, the government’s repeated delays since then mean it’s still very timely. It will be a real boost to the cause of effective platform regulation to have HOPE not hate’s input as the legislative process finally gets underway.


