In my experience as a nuclear medicine physician in a large hospital, Artificial Intelligence (AI) will help with the routine, time-consuming tasks of healthcare professionals. It can delineate tumours in repeat imaging and pre-digest work already performed on earlier scans, so I do not have to redo what has already been done over the course of a patient's previous treatment. This allows me to focus simultaneously on interpreting the imaging and on its actual impact on patient management.
Barriers to AI uptake
Healthcare professionals carefully evaluate the cost-effectiveness of AI tools that developers present to them. While a tool can seem expensive, it can still prove cost-effective, provided that it saves time and effort. Conversely, if the tool does not improve the quality of a professional's reading or interpretation of imaging, or the time spent on a study, even inexpensive AI solutions will not be considered, as they fail to deliver on the main desired outcomes of their deployment.
One challenge in healthcare AI development is that research activity is translated into clinical practice relatively slowly. We see many tools available in the research environment, but they are not yet implemented in clinical practice because they lack the appropriate approvals and cannot be used clinically.
An important component of implementing AI tools in routine clinical practice is that they should go through a training and validation cycle. The need for on-site validation poses a challenge in both academic and non-academic hospitals, since it requires time, resources, and curated data sets for training and validating AI algorithms. From a regulatory point of view, a tool is not allowed to continue learning from the hospital data it is used on: the database is locked the moment use approval is granted, which is a significant contradiction that needs to be addressed.
The AI-Healthcare Professional symbiosis
Healthcare professionals are now used to encountering various AI solutions. What I would like to see in future AI health solutions is the tool providing additional information useful for the clinical management of a patient: offering more complex evaluations and characterisations of lesions, and assisting in the decision of whether to keep the patient on their current therapy or change the treatment.
In terms of training and education, healthcare professionals should not only be informed about what AI can do but actually test it during hands-on workshops and on-the-job training. It is often neglected that current workflows may not be compatible with optimal implementation of AI tools, and that professionals need training on how to redesign their workflows to make optimal use of them; sometimes very simple adaptations ease the workload. Additionally, patients need to be made aware of, and educated about, what AI can do to improve their care. Finally, I would not expect an AI tool to fully take over the interpretation of imaging, but it would be highly beneficial if it could sort out which characteristics are within normal limits and pinpoint abnormalities one might have missed.
A call to policymakers: Alignment between relevant legislative initiatives
It is key to avoid the hurdles healthcare professionals witnessed when the GDPR was first introduced. I hope similar collateral damage can be prevented with the Artificial Intelligence Act, so its translation into national legislation requires careful attention. In addition, inconsistencies between the AI Act and the GDPR could pose serious issues, hampering the implementation of new tools.
My main wish is that patients and healthcare professionals are involved in the final decision-making of legislation, and in the development and oversight of implementation strategies.
Provided that appropriate safeguards are in place, patients are highly motivated to share their data to improve health outcomes for themselves and for future patients. The concerns of some patients (and family members) who are wary of machine-made rather than human-made decisions about their medical care need to be addressed and allayed. The effects, if any, of AI decisions on the shared decision-making model and process between patients and their doctors need to be discussed with patients, and appropriate reassurances need to be given. Patients' support of the AI Act is crucial, and it can only be achieved by involving them from the start. This is especially important when one considers who really owns data on patients: the patients. It's their data!
Lastly, I would caution that the more unnecessary restrictions legislation includes, the more difficult it becomes to develop tools that help healthcare professionals and patients achieve better diagnosis and treatment.
We need to advance our common efforts, based on a sense of trust, collaboration, transparent discussion and evidence. This is how we can strive towards a sustainable healthcare system that serves both today’s and tomorrow’s patients.