UK Parliament POSTnote – Interpretable machine learning
Dr Ansgar Koene, Senior Research Fellow at Horizon Digital Economy Research, is acknowledged for his contribution towards the development of the UK Parliament POSTnote number 633, Interpretable machine learning.
- Machine learning (ML) is being used to support decision-making in applications such as recruitment and medical diagnoses.
- Concerns have been raised about some complex types of ML, where it is difficult to understand how a decision has been made.
- A further risk is the potential for ML systems to introduce or perpetuate biases.
- Approaches to improving the interpretability of ML include designing systems using simpler methods and using tools to gain an insight into how complex systems function.
- Interpretable ML can improve user trust and ML performance; however, there are challenges, such as commercial sensitivity.
- Proposed ways to improve ML accountability include auditing and impact assessments.
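One of the approaches above, designing systems using simpler methods, can be illustrated with a minimal sketch of an "interpretable by design" model: a linear scoring rule whose output decomposes into per-feature contributions that can be inspected or audited. The feature names, weights, and threshold below are illustrative assumptions, not taken from the POSTnote.

```python
# Minimal sketch of an interpretable-by-design model: a linear scoring
# rule whose decision decomposes into per-feature contributions.
# All names and numbers are hypothetical, for illustration only.

def explain_decision(weights, bias, features):
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical recruitment-screening model (illustrative only).
weights = {"years_experience": 0.5, "relevant_degree": 1.0, "test_score": 0.02}
bias = -2.0
applicant = {"years_experience": 4, "relevant_degree": 1, "test_score": 70}

score, contributions = explain_decision(weights, bias, applicant)
decision = "shortlist" if score > 0 else "reject"

# Each contribution is directly attributable to a single input feature,
# so the basis of the decision can be read off and challenged.
print(decision, round(score, 2), contributions)
```

Because every contribution maps to one input, this kind of model supports the auditing and accountability mechanisms the note proposes, in contrast to complex ML systems whose internal reasoning is opaque.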
Link to POSTnote: Interpretable machine learning