Should the EU rewrite the rulebook on liability in response to advances in artificial intelligence?

  • 3 minutes
  • Posted on 15.09.2022

Ulrich Juknat

Legal Director Regulatory Law

As lawyers, we tend to use academic journals to tease out complex legal questions (see our latest contribution to the literature here). Today, however, we are taking the opportunity to discuss a hot topic with a broader audience, as we believe it will have a profound impact on the public, patients and health systems. That topic is the future of artificial intelligence (AI) in healthcare.

This issue must be addressed now because the EU is working on two new product liability proposals that are likely to affect AI and could, in turn, have a knock-on effect on innovation in healthcare.

First, it’s worth recalling the positive potential of AI to offer new efficiencies in how and where we deliver care, and to accelerate a second wave of digitalisation. AI promises earlier diagnosis and opportunities for more timely intervention that could improve patient outcomes. Clever new devices may even help to overcome the personnel shortages we face in the health and caring professions.

Of course, as with any new class of technology, public buy-in is essential to adoption. One concern that has recurred over the years centres on how our legal systems will cope with any damage that might be caused by new AI-based tools.

AI, medtech & liability

At the heart of this discussion is the EU Product Liability Directive (PLD). As the law is more than 30 years old, it strikes us as reasonable to ask whether it is still up to the task. We are not the only ones thinking about this: the question is also central to the European Commission's White Paper on Artificial Intelligence and its accompanying Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.

There are concerns in some quarters that the PLD would be ill-equipped to respond to AI products given their connectivity, autonomy and complexity. However, it is our view that some of these challenges – including automated decision-making, software as a product, cybersecurity and connectivity – have already been explicitly considered in the regulatory framework for medical devices.

The Medical Devices Regulation (MDR) and the In Vitro Diagnostics Regulation (IVDR) are, of course, relatively new. Both were drafted as AI products and other new technologies were coming to market – even if no piece of legislation could ever truly keep up with the rapid pace of medical innovation.

In any case, the PLD itself is technology-neutral and has been applied to a wide range of innovative products over several decades, including medical technologies.

These conversations about AI and product liability have another potentially significant consequence. Talk of broader changes to liability law has become commonplace recently, with the Commission preparing a proposal to amend the PLD.

This may go beyond AI-related technologies and have far-reaching consequences for the delicate balance between EU and national liability law. For example, we are concerned that the burden of proof might be reversed, so that manufacturers would have to prove that a product is not defective rather than claimants having to show that it is. This would distort the established distribution of risk and could, in turn, conflict with national civil liability systems.

Legal certainty and trust

The instinct to review existing legal frameworks in response to new waves of technological innovation is a sound one. It is wise to ask whether consumers are protected and innovation is encouraged, so that we can incentivise investment and foster societal acceptance of valuable new tools.

However, it is our view that changing the existing liability regime is unnecessary and would run counter to the urgent need to support innovation. Intervention in an established legal framework comes with risks, and we have yet to see a compelling case for re-opening the complex question of product liability in response to the perceived novelty of AI technologies.

Our view is that the PLD is sufficient to cope with current and future waves of innovation, just as it has been since its introduction. In addition, the MDR has introduced strict requirements for medical devices that incorporate AI technologies. Where gaps remain, they can be addressed under the draft AI Regulation.

By proceeding with due caution and a focus on stability, European policymakers can ensure that a balance is struck which continues to protect consumers while supporting the innovation our citizens and health systems need.
