The Transformative Power of Artificial Intelligence in Healthcare
Artificial Intelligence (AI) has the potential to transform many industries, healthcare among them. It can support healthcare professionals in their tasks, improve patient outcomes, and advance medical knowledge. When AI is used in medical devices and in vitro diagnostics, it must comply with specific requirements under the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation (MDR/IVDR). Under these laws, AI-powered medical technologies have long been made safely available on the European market.
Building a Regulatory Framework for Ethical and Innovative Healthcare
In the fast-changing world of healthcare, it is essential to have rules that encourage innovation while ensuring that care and diagnoses are delivered safely and fairly. The proposed AI Act is designed to achieve this in conjunction with existing and upcoming legislation, including the General Data Protection Regulation (GDPR), the Cyber Resilience Act, the Data Act, the European Health Data Space Regulation, and the revised Product Liability Directive.
One challenge with the AI Act is that its requirements may conflict with those of the MDR/IVDR. To resolve this, the European Parliament proposes to remove conflicting obligations by deeming high-risk AI systems compliant with the AI Act where they are already considered compliant under sectoral legislation. In this way, the regulations can be streamlined while ensuring that AI technology in healthcare remains aligned with established safety standards. MedTech Europe agrees with the European Parliament and suggests that the rules should also take into account the existing conformity assessment procedures and notified bodies, which are integral to ensuring that AI-enabled medical devices meet the standards set by sectoral laws.
Enhancing Clarity and Governance: Key Considerations for the AI Act from Healthcare Stakeholders
Enhancing the AI Act's legal clarity and governance structure is important to make the Act as effective and practical as possible. Several recommendations to this end have been highlighted in a joint statement by healthcare stakeholders, including patients, healthcare professionals, and the medical technology industry.
The signatories welcomed the European Parliament's clear definition of risk, which is crucial for assessing the impact and safety of AI systems. The narrower definition of “AI systems” introduced by the Council of the EU and the European Parliament aligns with international standards, especially those set by the OECD. This alignment promotes a consistent approach to regulating AI worldwide and makes it easier for different countries to work together.
Clear definitions in the AI Act are another important focus area. The European Parliament changed the term “user” to “deployer”, a change that needs further clarification to properly divide responsibilities, particularly since the term “affected person” was also introduced. As the signatories interpret it, this addition would refer specifically to patients and laypeople.
To avoid divergence between EU countries, it is important that the AI Act's rules are applied and implemented uniformly across all Member States. The proposed AI Board or AI Office is seen as a positive step in supporting Member States with implementation and enforcement. The signatories emphasise the importance of involving stakeholders in these governance structures. Regular participation in advisory groups or forums will enable informed discussion, promote accountability, and ensure that the regulations reflect real-world requirements and perspectives.
The AI Act has the potential to guide the responsible and ethical use of artificial intelligence in Europe. By listening to the concerns of different healthcare groups and considering their suggestions, the Act can be strengthened further. Clear definitions, fair risk evaluations, flexible data-handling rules, and inclusive decision-making processes are all important parts of the Act. Together, these elements will help regulate AI systems effectively, encourage innovation, and protect individuals and society.