Horizon Blog

UnBias – Research Update


UnBias was an EPSRC-funded project seeking to empower users against algorithmic bias for a trusted digital economy. The project comprised four work packages, one of which was Nottingham-led and focused on running ‘Youth Juries’ with young people aged 13–17. The Youth Juries discussed and debated issues around algorithmic bias, and put forward recommendations for how they would like the internet to change to become more age appropriate.

Two papers from our youth jury work were recently published. The first appeared in a Special Issue of the Journal of Information, Communication and Ethics in Society: “… They don’t really listen to people”: Young people’s concerns and recommendations for improving online experiences.

A second paper, describing the methods used in the youth juries, was recently published in PLoS ONE.

Alongside these publications, we were invited to be external reviewers for a Parliamentary Office of Science and Technology POSTnote on online safety education for children.

In addition to the publications and reviews, we have been disseminating the work of UnBias through talks and presentations. Details of these talks are given below:

  • At the Consultation Launch on the Relevance of the Rights of the Child in the Digital Environment (available here), Elvira Perez Vallejos attended the House of Lords in March and was pleased to network with Sonia Livingstone, Beeban Kidron and other members of ReEnTrust.
  • Elvira was invited to speak at the ‘Future of Tech for Mental Health’ event, where she presented her work on the UnBias and ReEnTrust projects: https://vimeo.com/321275041/f60bb23c19
  • Ansgar Koene gave an invited talk about our work on algorithmic bias and stakeholder engagement at the Purdue University “Policies for Progress” conference in May.
  • In June, Ansgar was an invited participant at the UNICEF workshop “Towards Global Guidance on AI and Child Rights”.
  • Ansgar gave a keynote presentation on the use of standards to address issues of algorithmic bias at the Women Leading in AI 2019 Conference in November.
  • Ansgar will take part in a roundtable discussion on “Misinformation, Responsibility & Trust” at the Internet Governance Forum 2019 in Berlin this November, and has also been invited to give a keynote on algorithmic bias at the 9th International Conference on Imaging for Crime Detection and Prevention in December.

Based on the work coming out of the UnBias project, Ansgar was offered a position as AI Regulatory Advisor at EY Global.

More updates to follow.
