It is no secret that the AI regulatory landscape has experienced monumental shifts in the last six months. Rhetoric portraying regulation as a barrier to AI innovation, and to the cross-sectoral opportunities it offers, including for health and life sciences, has grown stronger.
Since last autumn there have been murmurings that the US would change tack on its approach to AI regulation, and President Donald J. Trump has since reversed many of the Biden administration’s steps towards specific regulation of AI. The EU, too, unexpectedly abandoned its efforts to adopt the proposed AI Liability Directive and has introduced a programme of work, the “2025 Commission work programme”, which suggests it may also begin to adopt a more innovation-friendly approach.
The UK, too, has adopted pro-innovation rhetoric in its AI policy making. However, its regulatory approach has been characteristically different from that of the EU and US: it has neither created AI-specific regulation (like the EU) nor taken a strongly deregulatory stance to foster innovation (like the US). Instead, it has typically sought to revise and amend existing sector-specific legislation or guidance to support AI developers and protect the public, as well as introducing measures that bolster regulators’ enforcement capacities.
Yet the UK’s approach is not without its challenges. Revising and applying existing regulation to AI must be done carefully to avoid conflicting regulatory burdens or duplications. It also requires ensuring that regulators across sectors have the capacity and expertise to assess and respond to the novel risks posed by AI technologies. Coordination is essential to prevent harmful developments from ‘falling through the cracks’ of these different regulatory regimes.
At the PHG Foundation, our work has therefore focussed on how existing law and regulatory mechanisms can better apply to AI. Rather than multiplying frameworks, what is needed is better feedback on how these frameworks are being implemented and how we can improve their use in practice.
Governance and liability in medical AI: Who’s responsible when AI goes wrong?
As AI tools in healthcare begin to outperform humans in certain tasks, striking the right balance between AI autonomy and human oversight is becoming increasingly complex. Questions of responsibility and liability, especially when harm occurs, are now front and centre for regulators, developers, and clinicians alike.
To unpack these challenges, the PHG Foundation and the Centre for Law, Medicine and Life Sciences (LML), as part of the Inter-CeBIL Programme, explored how multiple regulatory regimes intersect – including medical device law, the EU AI Act, data protection, and negligence and product liability law. Our November workshop brought together experts to discuss how responsibilities and liabilities are currently allocated across developers, deployers, and users of medical AI – and where the gaps and overlaps lie. The discussions generated a wealth of insights and a set of pressing questions for policymakers, researchers, and regulators working to build a clear and coherent framework for accountability in medical AI. Stay tuned for our forthcoming briefing note and continued work in this area.
Synthetic health data
Synthetic data have significant potential to advance the development and deployment of Artificial Intelligence as a Medical Device (AIaMD), particularly in areas where real-world data are limited, sensitive, or costly to obtain. While existing guidance provides a strong foundation, the use of synthetic data – especially when it forms a central part of the evidence base in regulatory submissions – requires additional clarity.
Recognising this, we are actively collaborating with partners who are developing guiding principles to facilitate the dialogue between manufacturers and approving bodies. The hope is to offer a structured approach to help manufacturers consider, compile, and justify their use of synthetic data in AIaMD development. This supports our overarching aim to help ensure that regulatory expectations keep up with the speed of innovation and reinforce responsible use of synthetic data for the benefit of patient health.
Challenges for the post-market surveillance of medical AI
AI also presents challenges for the regulation of medical devices once they are in use, because changes can occur after deployment that affect their safety and performance. The innovative and collaborative approach that the MHRA has adopted through its AI Airlock programme draws on real-world products and multidisciplinary expertise to identify and address novel regulatory challenges posed by AI. To support this, we ran a workshop in March that brought together clinical, technical, policy, legal and regulatory experts to generate understanding and ideas around monitoring and reporting needs for AI as part of the post-market surveillance of medical devices.
Important themes running through the projects
- Agile regulation and international alignment are key: with AI capabilities accelerating faster than our current safety mechanisms can adapt, regulators must be prepared to respond dynamically. This includes not just national efforts, but greater regulatory alignment across borders. Governments must find more common ground, especially where AI intersects with high-stakes domains like health. The UK’s decision not to sign the international agreement on AI at the 2025 Global AI Summit in Paris only heightens the urgency of avoiding regulatory fragmentation.
- Regulatory and ethical concerns have not disappeared: they are simply not the dominant narrative at present. While previous summits focused heavily on safety, this year’s leaned strongly toward innovation. We expect, however, that with this renewed emphasis on innovation, the associated risks will soon re-emerge more visibly, pushing the debate back toward a more balanced position that takes safety concerns seriously. Especially in the context of health, these issues cannot be ignored. Maintaining public trust is essential to avoiding setbacks, and anticipating potential harms is critical to preventing them.
- Resourcing: it is important to ensure that regulators have the resources and expertise to keep pace with the rapidly evolving technologies that fall under the umbrella of “AI”. Better coordination is needed, both across regulatory domains and at the international level, to create coherent, responsive, and future-ready governance. Recent global shifts signal a move towards pro-innovation approaches, but this must not come at the cost of safety, public trust, and ethical oversight, especially in high-stakes areas like health.
At the PHG Foundation we are working with a range of partners on this delicate balancing act, to promote responsible adoption of AI technologies that are not only innovative but also effective, safe, and truly improve patient outcomes.