A right to explanation?

How the GDPR requirement for a right to explanation has consequences for the use of machine learning in healthcare

Discussion paper

Machine learning is a form of AI that combines massive datasets and advanced processing power with statistical models to develop a system that 'learns' how to perform a specific task without explicit instructions. In healthcare, machine learning has many potential applications, though its use raises a number of concerns.

The fundamental basis of all machine learning is the data upon which the system is trained, and the processing of data is regulated. Within the EU, data that counts as personal is regulated under the General Data Protection Regulation (GDPR), and one of the most contentious elements of the Regulation is the right to explanation. The very existence of this right, how it should be interpreted, and how it might be satisfied are still subject to intense debate.

This paper outlines the arguments for a right to explanation, the other mechanisms the GDPR provides that might require explanation of machine learning models and their outputs, and the consequences of such a requirement for the use of machine learning in healthcare.

Key points

  • Machine learning is a promising healthcare technology, but raises concerns because some models may be opaque 'black boxes'
  • Article 22 of the GDPR contains a specific right to explanation. Only a subset of machine learning applications in healthcare will trigger this narrow right
  • How the right to explanation and transparency requirements will apply to machine learning remains unclear

This work was supported by the Wellcome Trust, grant number: 213623/Z/18/Z

By Johan Ordish and Alison Hall
