Legal liability for machine learning in healthcare

Who is responsible for any malfunction in software that results in inaccurate or delayed diagnosis?

By Johan Ordish

Policy briefing

This briefing note was produced as part of our project on Regulating algorithms in healthcare.

Machine learning for medicine has the potential to change clinical practice. The technology promises to make the diagnosis of various conditions both quicker and more accurate. This technological shift might also compel us to reconsider aspects of clinical and product liability. Machine learning is not infallible and so the question of who should be liable for any malfunction that results in inaccurate or delayed diagnosis is one we will soon have to answer.

Summary

  • Clinicians or their employers are currently the primary defendants in actions for clinical negligence
  • Machine learning may challenge the existing legal position whereby clinicians are liable for software malfunctions that contribute to inaccurate or delayed diagnosis
  • There are concerns that machine learning could foster the growth of black box medicine, where clinical decision making becomes increasingly opaque
  • The legal uncertainty over whether machine learning satisfies the definition of a product is not unique, but is exacerbated by specific factors in relation to healthcare
  • With the new wave of machine learning for medical applications, the current approach to clinical liability should be reconsidered

What is machine learning?

‘Machine learning is a technology that allows computers to learn directly from examples and experience in the form of data... [M]achine learning systems are set a task, and given a large amount of data to use as examples of how this task can be achieved or from which to detect patterns... It can be thought of as narrow AI: machine learning supports intelligent systems, which are able to learn a particular function, given a specific set of data to learn from’1.
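For illustration only (this sketch is not part of the original briefing), the short Python example below shows the general pattern the definition describes: a model is set a task, fitted to example data, and then applied to a new case. The library (scikit-learn), the features and the figures are assumptions chosen purely for demonstration, not a description of any real clinical system.

```python
# Illustrative sketch only: a model "learns from examples" by fitting
# parameters to labelled training data, then applies what it has learned
# to new cases. All data here are invented for demonstration.
from sklearn.linear_model import LogisticRegression

# Each row is one hypothetical patient: [age, systolic blood pressure]
examples = [[34, 118], [51, 135], [63, 149], [47, 128]]
labels = [0, 0, 1, 1]  # 1 = condition present in the training example

model = LogisticRegression()
model.fit(examples, labels)          # learn directly from the examples

new_patient = [[58, 142]]
print(model.predict(new_patient))        # predicted label for the new case
print(model.predict_proba(new_patient))  # probabilistic output
```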

With the implementation of machine learning in medicine, it is time to examine whether clinicians who have taken due care should be liable if an algorithm causes damage or loss to their patient. The expansion of machine learning in medicine could also exacerbate old ambiguities in product liability, leaving those who have suffered loss without any robust way to recover damages.

Black box medicine

While machine learning may change the practice of medicine for the better, one worry is that the introduction of such technology will foster the growth of ‘black-box medicine’2 – i.e. ‘the use of opaque computational models to make decisions related to health care.’  These models are opaque, not by design, but often by necessity because of the amount and complexity of the data used3. Moreover, techniques such as machine learning do not easily lend themselves to human concepts of explanation and significance. Machine learning outputs are typically probabilistic and sometimes inscrutable.

Example

While trained pathologists will be able to explain why they have identified some parts of a scan as being of clinical concern, it may be unclear why a machine learning image-analysis algorithm has assigned significance to a particular finding or image.

Black box negligence

Unsurprisingly, clinicians are the primary defendants in actions for clinical negligence. If a patient is harmed by a faulty diagnosis, the most obvious response is to sue the clinician or, more probably, their employer. This remains true even where software may have contributed to the faulty diagnosis, because in many jurisdictions the case law has developed in relation to software used to support rather than make clinical decisions.

Example

Where clinical decision support software is used to detect drug contraindications (e.g. interactions between potential medications)4, if the software fails to detect an interaction, the clinician remains at fault in clinical negligence for any resulting and foreseeable injury, despite the apparent flaw in the software. This is on the basis that the decision to prescribe ultimately remains with the clinician, and so the clinician is at fault.

With the new wave of machine learning for medicine, we must reconsider the current approach to clinical liability. Machine learning applications nearing clinical use (e.g. image analysis algorithms) will assist, not replace, clinicians. However, future machine learning software is likely to go beyond current applications, being more sophisticated than its manually programmed counterparts.

Example

Risk prediction tools are likely to utilise machine learning in the future, processing large volumes of data to generate predictions. These predictions may be highly accurate, but the precise reason why one patient is deemed to be at higher risk than another may remain hidden. The uptake of inscrutable algorithms should make us question whether clinicians who have interpreted machine learning outputs with due care should be held liable for an improper diagnosis that stems from a machine learning malfunction.
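As a hedged illustration of that point (again, not part of the original briefing), the sketch below trains a hypothetical risk model on synthetic data. The single probability it returns for a patient may be accurate on average, yet it offers no human-readable rationale a clinician could inspect. The library, the features and the data are invented for demonstration.

```python
# Hypothetical risk prediction tool: an ensemble model returns a risk
# probability but no explanation of why one patient scores higher than
# another. All data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))   # 500 synthetic patient records, 20 features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200).fit(X, y)

patient = X[:1]
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted risk: {risk:.2f}")
# The prediction is spread across hundreds of decision trees; the single
# number above carries no reason that can be read off by a clinician.
```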

Product liability

Typically, the law lets ‘loss lie where it falls’, meaning the claimant goes uncompensated unless it can be shown that the defendant was ‘at fault’ – that is, that they acted unreasonably. However, there are some special situations, such as product liability, where legislation stipulates that a defendant can be liable to compensate another person’s loss even if they were not at fault. This is a form of strict liability. Product liability is not concerned with a defendant being held liable for doing something wrong; instead it is concerned with holding a defendant liable because something has gone wrong5.

This form of liability is entrenched in the Consumer Protection Act 1987 (CPA) and its associated Directive. Defendants who put ‘defective’ products into circulation can be liable for the damage they cause. This gives those injured by defective products a way to recover damages other than by suing for breach of contract or negligence6. The advantage is that the claimant does not have to prove that the manufacturer was at fault for the damage (as in negligence) or that they had a contract. Consequently, product liability provides an extra avenue for those injured by products to sue manufacturers and sellers.

Software as a product

Machine learning is a programming technique that may be incorporated into software. For a claim in product liability, there must be a product. ‘Product’ and the closely related term ‘goods’ are given various definitions in legislation. ‘Product’ can be defined as ‘any good or electricity’ or as any ‘moveable’7. ‘Goods’ is defined as ‘including all substances’8. While all of these definitions are vague, both the CPA and the Directive point to a definition of ‘product’ that primarily includes physical items. Given this, software under the CPA may echo the interpretation of software under the Sale of Goods Act 1979, where software does not count as ‘goods’ but the disk containing software does. This is an awkward situation, out of step with increasing digitisation and the advent of cloud computing.

Broadly, there are two ways software might constitute a product. Uncontroversially, if software is a component in a wider physical product, there will be a good claim against the composite product. For example, if software incorporated by the manufacturer into a blood glucose monitor malfunctions, the manufacturer of the composite product will be liable for that malfunction. More controversially, standalone software – software not incorporated into a wider product – might also count as a product, but whether it does so for the purposes of product liability remains contested9.

Machine learning as a product

The legal uncertainty over whether machine learning software satisfies the definition of a product is not unique. However, three features might exacerbate this legal uncertainty when using machine learning for health: 

  • First, given the potential opacity of future machine learning software, proving fault in a regular negligence claim may be difficult. Perhaps the only feasible avenue available to many claimants will be a claim in product liability
  • Second, if machine learning software is incorporated into some products yet does not qualify as a standalone product, product liability claims may only be available for damage caused by the former
  • Third, if a machine learning algorithm leads to a faulty or delayed diagnosis or treatment, the liability for any resulting injury may be extremely costly

The combination of these three elements means that product liability claims may become increasingly necessary to bring a successful case, yet remain inconsistently available, despite pressure from claimants who have suffered significant harm.

Liability and healthcare

Machine learning may challenge the existing legal position whereby clinicians are liable for software malfunctions that contribute to inaccurate or delayed diagnosis. Moreover, the uptake of machine learning suggests that we should revisit whether manufacturers and sellers of software should be strictly liable under product liability law.

How we deal with these questions of legal liability will affect the cost and spread of the technology in the health sector. Moreover, any scheme of liability will incentivise or disincentivise certain behaviour – ultimately shaping the practice of medicine. 

References

  1. Machine learning: the power and promise of computers that learn by example. The Royal Society. April 2017.
  2. Price, W N. Black-box Medicine. Harvard Journal of Law & Technology; 2014. 28(2): pp. 419-468.
  3. Price, W N. Artificial Intelligence in Health Care: Applications and Legal Implications. The SciTech Lawyer; 2017. 14(1): pp. 1-7.
  4. Miller, R A. Legal and regulatory issues related to the use of clinical software in health care delivery. In Clinical Decision Support: The Road Ahead; 2007. pp. 423-444.
  5. McBride, N J, Bagshaw, R. Tort Law; 2015. pp. 393-396.
  6. Stapleton, J. Product Liability; 1994. pp. 37-38.
  7. Section 1(2)(c) Consumer Protection Act 1987 and Article 2 Directive 85/374/EEC.
  8. Section 45(1), Consumer Protection Act 1987.
  9. For example, it is unclear whether risk prediction tools themselves will constitute a product under the 1987 Act.

 We would like to thank Dr Kathy Liddell and Dr Jeffrey Skopek of The Cambridge Centre for Law, Medicine and Life Sciences for their work on this briefing note.
