An area of research I am particularly interested in concerns online communications and the controversies over how they should be governed. The past 15 years have seen a massive increase in Internet-mediated communications, including personal websites, blogs, discussion forums, and social media platforms. The ease with which users all over the world can post content and interact with others bolsters freedom of speech and creates many opportunities for civic participation. At the same time, the accessibility of online communications, in particular anonymous posting, also enables the spread of harmful content such as misinformation and hate speech. Interventions to limit the negative impacts of online communications often risk limiting their benefits too, and proposed policy interventions, such as the UK government’s Online Harms White Paper, can be highly controversial.
In a previous research project, colleagues and I considered the dilemmas surrounding the responsible governance of online social spaces. How can the harms of online communications be addressed without impeding rights to freedom of speech? Legal mechanisms are in place to punish individuals for their posts in certain circumstances, but they lack the capacity to deal with the frequency and volume of online communications. Furthermore, they are enacted retrospectively, after the harms have already been caused. Similarly, online platforms often rely on users reporting inappropriate content for investigation and potential removal. Once again, this does not capture all potentially harmful content, and in the time before content is removed it can spread and cause considerable harm.
Given the limitations of these legal and platform mechanisms, our project team became very interested in the potential of user self-governance in online spaces. User self-governance involves individuals moderating their own behaviours and also moderating those of others. We examined Twitter data to analyse the effects of counter speech against hateful posts. This revealed that multiple voices of disagreement can quell the spread of harmful posts expressing racism, sexism, and homophobia. When multiple users express disagreement with a post, this can discourage the original poster from re-posting and encourage others to reflect carefully before sharing the content. User self-governance therefore appears to be an effective real-time mechanism that upholds freedom of speech rather than undermining it.
It is likely that user actions to moderate themselves and others online will increase in significance. More and more users are choosing services that provide end-to-end (E2E) encryption, preserving the privacy of their interactions. In these contexts, new challenges and dilemmas emerge around governance. How can platforms and legal mechanisms deal effectively with misinformation and hate speech in E2E encrypted spaces whilst respecting rights to privacy? It is crucial for research to investigate these questions, and for this reason I am very excited to be part of the Horizon project Everything in Moderation, which focuses on the moderation of interactions in private online spaces. We explore the relative strengths and weaknesses of legal mechanisms, technical mechanisms, and user self-governance for moderation. As part of this, we intend to engage with online communities that choose to interact in private spaces. We will explore how they establish their own norms and practices for communication, and how they deal with troublesome or harmful content when it is posted. Through this we will map out the various forms of user self-governance adopted in private online spaces. This will contribute to current debates around best practices for online governance and the future of Internet-mediated communications.
Written by Helena Webb