Horizon Blog

Everything in Moderation mid-project blog – legal issues in moderation

We are halfway through our Everything in Moderation project and have had a productive six months working to understand the issues in moderation and encryption from a variety of angles. This has included two meetings with our advisory group, a series of interviews with online community moderators, and work on understanding the legal and policy landscape. We have also won additional funding to run an associated project in Zimbabwe.

We have been speaking to people across different platforms, including Facebook, Reddit, WhatsApp, and Discord, and are still looking for participants. If you moderate an online community (e.g. a Facebook group, subreddit, or Discord server), whether voluntarily or professionally, or are part of an online group (e.g. on WhatsApp) where you have had to deal with unwanted behaviour, please get in touch! We are also about to start a series of interviews with industry and policy stakeholders to ensure that we understand the issues from different angles.

We would like to take this opportunity to provide an overview of the current regulatory frameworks that apply to moderation on open social media platforms. Our crack legal team have been working hard to analyse the legal landscape surrounding moderation, which is currently in a period of great flux.

Social media platforms, characteristic of Web 2.0 (user-moderated platforms), are largely exempt from civil and criminal liability for third-party content (in the USA under section 230 of the Communications Decency Act of 1996, in the EU under the e-Commerce Directive 2000, and in the UK under the Electronic Commerce Regulations 2002). At the same time, these platforms have created a self-regulatory framework (policies and terms and conditions) requiring their users to avoid violent, threatening, harassing, and hate-filled posts and to report such content. Platform employees review these reports and decide on them, following platform guidelines. Platform users can also report illegal content to law enforcement authorities and seek legal protection. Public authorities can request and gain access to users’ account information and content, in accordance with applicable law and procedures.

In recent years, the rise of misinformation has led platforms to search for adequate tools and technical solutions to counter misinformation and disinformation, and to cooperate with regulatory bodies (e.g. the EU Code of Practice on Disinformation). In addition, as the Web 2.0 ecosystem has grown, it has become marked by complex power imbalances, leading to an awareness that more transparency is needed to understand how platforms moderate content and how their users access information through the personalised profiles these applications build. The EU Digital Services Act thus attempts to impose new obligations on platforms in these areas.

The proposed, and now paused, Online Harms Bill in the UK also attempted to place on user-generated content platforms the responsibility to tackle and remove illegal material online, particularly material relating to terrorism and child sexual exploitation and abuse. In addition, the proposed UK Online Safety Bill aimed to lay down rules in law about how platforms should deal with harmful content in end-to-end encrypted applications. Human rights organisations, NGOs, technologists, and security experts have warned that facilitating such access would put data safety and users’ privacy at risk, could lead to the general scanning of populations, and could have a significant chilling effect on people’s rights to free expression and association. We intend to further explore how the new regulatory framework, as well as new technological developments (e.g. disinformation through deep fakes, new forms of user interaction using avatars, new encryption technologies), will change these communication ecosystems and create new regulatory challenges.

Written by Liz Dowthwaite and Anna-Maria Piskopani