Erasmus MC and TU Delft open first healthcare AI-ethics lab
Staff shortages and the constant drive to provide high-quality medical care: these are just two of the most important reasons why the application of artificial intelligence (AI) in healthcare is expected to increase sharply in the coming years. With the launch of the first healthcare AI Ethics Lab, Erasmus MC and TU Delft are focusing on ethically responsible and clinically relevant AI that will positively impact both patient care and healthcare workers.
In the future, will doctors discontinue medical treatment based on information provided by a computational model? This may be one of the most difficult questions regarding the application of AI in healthcare. But there are many more, less formidable questions – for example, whether it is safe for a patient recovering from surgery to be discharged a few days earlier, a decision that both benefits the patient and frees up hospital resources.
More time
AI can use data and computer models to predict patient outcomes, making the healthcare system more efficient. Co-initiator Diederik Gommers: “This enables the medical staff to devote more time to the patient, which ultimately ensures that staff do not leave, but rather remain in the healthcare field: less workload, better quality.”
WHO
It is imperative that the underlying AI models that support doctors in making such medical decisions provide ethically responsible recommendations. “The World Health Organization has identified six core principles for AI in healthcare, such as a clear allocation of responsibilities and ensuring fairness and applicability for each individual patient,” says Stefan Buijsman, Assistant Professor of Ethics at TU Delft. Jeroen van den Hoven, director of the TU Delft Digital Ethics Centre, contributed to the WHO AI principles. “The major challenge is that, oftentimes, it is not self-evident what it means for an AI model to be fair and how you can guarantee such fairness.”
Safe and demonstrably beneficial
The Responsible and Ethical AI in Healthcare Lab (REAiHL), a collaboration between Erasmus MC, TU Delft, and software company SAS, aims to answer these questions. “The clinical expertise of Erasmus MC is in the lead – they provide the use cases and will be the ones applying the AI models in clinical practice,” Buijsman says. “For more than two decades now, TU Delft has been at the forefront of digital ethics – how to translate ethical values into design requirements for engineers.” In addition to responsible design, TU Delft will also play an important role in demonstrating the clinical added value of the developed AI models. “On the one hand, this involves demonstrating the positive impact on patient care,” says Jacobien Oosterhoff, Assistant Professor of Artificial Intelligence for Healthcare Systems at TU Delft. “Much is already known about how to safely test rockets to Mars in a remote area. But there are still many open questions when it comes to safely testing AI for patient care. A second focus is the effective integration of AI models into the clinical workflow, ensuring that doctors and nurses feel truly supported. Our new lab is dedicated to answering these open questions. Our approach, involving doctors, engineers, nurses, data scientists, and ethicists, provides a unique synergy.”
A hospital-wide framework
“Initially, the new AI Ethics Lab will focus on developing best practices for the Intensive Care Unit,” Buijsman says. “But our ultimate goal is to develop a generalized framework for the safe and ethical application of AI throughout the entire hospital. We therefore expect to soon start addressing use cases from other clinical departments as well.”
Pictured are (from left to right) Phaedra Kortekaas (SAS), Diederik Gommers (Erasmus MC), Antonie Berkel (SAS), Reggie Townsend (SAS), Jacobien Oosterhoff (TU Delft), Michel van Genderen (Erasmus MC), Stefan Buijsman (TU Delft), Jeroen van den Hoven (TU Delft).
Read these stories on the websites of Erasmus MC and TU Delft.