SafeSpacesNLP

SafeSpacesNLP: Behaviour classification NLP in a socio-technical AI setting for online harmful behaviours for children and young people

This project will explore the use of socio-technical Natural Language Processing (NLP) to classify behavioural online harms in online forum posts (e.g. bullying; drugs and alcohol abuse; gendered harassment; self-harm), with a particular focus on children and young people.

Research Questions include:

  • How beneficial is iterative re-ranking using human feedback for modern NLP algorithms?
  • Should we adjust the algorithms or adjust the data coding? How do we optimise this balance in a socio-technical AI system?
  • Can behaviour classification better target human interventions? Can we grow trust in AI through socio-technical teams?

Project dates: 01 JUL 2021 – 30 JUN 2022

Project Team:

PI: Stuart E. Middleton, University of Southampton – Computer Science

CoI: Anita Lavorgna, University of Southampton – Criminology

Elena Nichele, University of Nottingham – Applied Linguistics

Jeremie Clos, University of Nottingham – Computer Science

Santiago De Ossorno Garcia, Kooth Plc – Psychology

Radu-Daniel Voit, University of Southampton – PhD Student, Computer Science

Overview Video (17:56 – 21:40)

Outputs:

A Privacy-Preserving Observatory of Misinformation using Linguistic Markers – A Work in Progress. TAS '23: Proceedings of the First International Symposium on Trustworthy Autonomous Systems, Article No. 51, pp. 1–4

Predicting Stance to Detect Misinformation in Few-shot Learning. TAS '23: Proceedings of the First International Symposium on Trustworthy Autonomous Systems, Article No. 53, pp. 1–5