News

Hidden layers – exploring the values shaping our digital future

Written by Vincent Bryce

Diagram of a goalline technology system. Licensed under CC-BY-SA 3.0 (attribution: https://commons.wikimedia.org/wiki/User:Maxxl2)

The idea that there should be a “human in the loop” for important decisions has become an axiomatic assumption of many commentators on responsibility and digital technology, and a rallying cry for human-centred AI narratives.

But what are the motivations and values of humans with roles in technology-enabled decision making? Does their involvement enhance, or reduce, a system’s inherent fairness? And how do we get not just humans, but “humanity in the loop”?

Recent media coverage has highlighted how seemingly highly automated systems may include human interventions, and how these can put a system at odds with broader stakeholder perceptions of fairness.

Unexpected items in the bagging area

The “Video Assistant Referee”, or VAR, has risen to prominence in the world of football, promising technology and automation as a seemingly objective way to judge game-deciding decisions, such as whether the ball has crossed the goal line. However, behind the VAR sits a human referee, and when a decision appears to go against what stakeholders perceive as fair and impartial standards, the role of this ‘human in the loop’ is called into question and attention is drawn to the possibility of bias.

The Post Office Horizon scandal illustrates how the ability of managers and operators to intervene in ostensibly automatic processes raises fundamental questions about the claimed fairness of decisions based on data from the underlying system. The saga also illustrates the potentially life-changing impact of technology-enabled decisions on individuals, and the need for greater reflection on the limits of technology and on management responsibility for the decisions that follow.

The world of Human Resources provides examples of how purportedly scientific methodologies, brought in with the promise of reducing individual fiat, can introduce biases of a different kind which may be less obvious but lead to decisions that are just as unfair. The cases of Amazon’s algorithmic talent management and HireVue’s automated video-interview-based recruitment show not just that machine learning systems can incorporate a range of biases inherent in the data used by developers to train them, but that they may go beyond this, creating a veneer of objectivity over management decisions which remain inherently subjective.

Visualising hidden bias

The ‘hidden layers’ involved in the development of machine learning based systems – in both the technical sense and in the sense of the underlying design decisions which shape how they operate – are increasingly complex, can be hard to understand, and are often wrapped in glamorous sales pitches about the transformative effects of generative AI and data science.

One way to understand the potential for hidden bias, and the developer decisions which shape these systems, is to review the results of AI-generated imagery. Depending on the platform, prompts for ‘football player’ will typically yield male figures, reflecting the bias inherent in the data society generates on a given topic. As a result, the ability to prompt a system in a way that returns appropriate results may depend heavily on the operator of the system, and in turn on the perceptions of fairness among the audience(s) they are working with.

While mitigating this bias is an area of very active academic and industry exploration, recent challenges experienced by Google’s Gemini project indicate that it is a difficult issue to address (and that addressing it may involve confronting societal stakeholders about current and historical inequality).