Artificial Intelligence (AI)-based systems are reshaping our society: technologies such as ChatGPT have wide implications for our daily lives. For medical technology companies, the ability to incorporate AI into their products will be essential, but the practicalities of doing so often appear opaque.

A large part of this uncertainty arises from a lack of regulatory guidance: the novelty of medical AI, combined with regulations such as the Medical Device Regulation (MDR) that do not mention AI at all, leads to a perceived large grey area. In addition, upcoming regulations such as the EU Artificial Intelligence Act add more rules on how AI can and cannot be integrated into our daily lives, but they are not specific to medical devices. The implications of combining the MDR with the AI Act are widely discussed in the medtech community, often painting a bleak picture of what is and is not allowed for a medical AI.

In this article, we will look not at the regulations but at the implementation, because it clearly is possible, and widely accomplished, to bring the benefits of AI to medical software. The medtech community needs a pragmatic, data- and software-centric approach to medical AI—because in the end, AI is just that: a special kind of software that learns statistical features of existing data labeled by human experts ("training the AI") and then makes predictions on new data, "mimicking" the human experts. Typical medical applications are the detection of tumors in computed tomography images, or the detection of atrial fibrillation in electrocardiograms.

AI and Patient Risk

In medical technology, including medical AI, our main concern is patient risk. In medical AI, risk to the patient arises primarily during the prediction phase, where the AI predicts potential medical outcomes based on the data it has been trained on. However, no AI system is perfectly accurate. Imagine a scenario where a patient has a tumor or displays signs of atrial fibrillation in their ECG, but the AI fails to detect it. The aftermath of such an overlooked diagnosis—a false negative—can of course be devastating, and the chance of it occurring must be reduced. Now imagine the opposite scenario: a patient does not have a tumor or does not display signs of atrial fibrillation, but the AI detects it anyway. The consequence of such a false-positive diagnosis might be unnecessary follow-up tests and treatments, causing great stress to the patient. This is why the AI's sensitivity (the fraction of real cases it catches) and specificity (the fraction of healthy cases it correctly clears) must be maximized—they are the key to enhancing the safety and effectiveness of our medical AI.
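To make these two quality metrics concrete, here is a minimal sketch of how sensitivity and specificity are computed from a classifier's output on labeled data. The labels and predictions are purely illustrative, not from a real device:

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = disease present)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # correctly detected cases
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # missed cases
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # correctly cleared patients
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false alarms
    sensitivity = tp / (tp + fn)  # share of real cases the AI caught
    specificity = tn / (tn + fp)  # share of healthy patients the AI cleared
    return sensitivity, specificity

# Toy example: 4 diseased patients, 6 healthy; one miss, one false alarm.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

A missed tumor lowers sensitivity; a false alarm lowers specificity—exactly the two failure modes described above.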

Data Collection and Privacy Concerns

The key to a safe and effective medical AI—and the hardest and most expensive part of medical AI development—is acquiring appropriate data to train, test and validate on. This is no easy feat even for non-medical AIs, and many development projects fail at this point: if the data is not abundant enough or does not properly reflect the AI's use case, the AI will never meet its users' expectations. For medical AI, the requirements are even harder to meet:

  • The ubiquitous topic of patient privacy and consent must be taken into account
  • The data must reflect the intended patient population across many parameters such as demographics, diagnoses and the region where the AI will be rolled out
  • The data must be properly stratified
  • The data must be labeled by experts, i.e., physicians
  • The dataset must have an appropriate sample size to be able to prove that the AI works as intended
  • and many more
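Two of these requirements—stratification and splitting into training, test and validation data—can be addressed together. The following is an illustrative standard-library sketch of a stratified split that preserves the label distribution (e.g., diagnosis) in each subset; in practice, a library function such as scikit-learn's `train_test_split` with its `stratify` parameter covers this:

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, fractions=(0.7, 0.15, 0.15), seed=42):
    """Split (sample, label) pairs into train/test/validation, per label."""
    by_label = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_label[label].append(sample)
    rng = random.Random(seed)  # fixed seed: the split must be reproducible
    train, test, val = [], [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        n_train = int(len(items) * fractions[0])
        n_test = int(len(items) * fractions[1])
        train.extend((s, label) for s in items[:n_train])
        test.extend((s, label) for s in items[n_train:n_train + n_test])
        val.extend((s, label) for s in items[n_train + n_test:])
    return train, test, val

# Toy dataset: 100 samples with an imbalanced diagnosis, as medical data often is.
samples = list(range(100))
labels = [0] * 80 + [1] * 20
train, test, val = stratified_split(samples, labels)
print(len(train), len(test), len(val))
```

Because the split is done per label, the rare diagnosis keeps its 20% share in every subset instead of vanishing from, say, the validation set by chance.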

But how do we actually develop a medical AI?

Steps to Develop Medical AI

These eight steps summarize the process, but assume that you already have an ISO 13485-compliant Quality Management System (QMS) in place, with the appropriate processes to develop medical software:

  1. Develop and validate your product idea, determine the intended use of your product.
  2. Determine the quality criteria for your AI: What are acceptable levels for accuracy, sensitivity and specificity? Take a risk-based approach. Start with literature, existing products and common sense to find the answer.
  3. Develop a data strategy: Which data are needed? How much data do you need? Where do you obtain the data? Which group of experts can label your data? Which IT systems do you need for all of that? How do you split your data into training, test and validation data?
  4. Build a validated data pipeline, collect and label your data. Ideally, most of that pipeline should run automatically, including AI training.
  5. Start developing your medical AI as soon as the first data are coming in. Make sure to follow the relevant software standards such as IEC 62304 from the very beginning.
  6. Start developing the software system around the AI (such as the user interface and the business logic) as soon as you are certain your AI can reach the quality criteria.
  7. Go through software verification and validation to prove the AI and the surrounding software system reach the determined quality requirements.
  8. Go through the approval process.

These steps assume that you are already experienced in developing Software as a Medical Device (SaMD). Behind each of these steps lie many considerations we must take into account—some are learned painfully during a medical AI project, others can be anticipated beforehand through existing materials. The most important material, next to the relevant standards, is in my opinion the IG-NB questionnaire "Artificial Intelligence" (in version 4 as of this writing), which provides many questions that assist in developing a compliant medical AI. Use this questionnaire from the start of your development to get and stay on track.

Take the Leap!

The most important step is: Start! Medical AI is doable and very well within the capabilities of every medical technology company.

Interested in developing a medical AI?

Contact me for more information. I’m looking forward to working with you.