AI & Usability in Medical Devices Series – Part 2: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (FDA)

Author: Benjamin Franz

Mar 2025

Artificial intelligence is changing the face of medical technology, from diagnosis to treatment planning. But with new capabilities come new challenges.

For example, usability is becoming an increasingly important design consideration. As AI systems become more difficult for humans to understand, user interfaces must compensate. Manufacturers of AI-based medical devices must therefore develop transparent, understandable and reliable systems that meet both regulatory requirements and the needs of users.

This is where the FDA’s draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” (Draft 2025), comes in. Reason enough to take a closer look at the document.

In this article, we analyze what requirements can be derived from the draft and what the practical implications are for manufacturers. Our focus is on usability and the user interface (UI).

This article is part of our series “AI & Usability in Medical Devices”, in which we examine the most important AI guidelines and highlight their impact on the usability of medical devices. Each part of the series is dedicated to a specific regulatory guideline. We will also summarize the collective findings in a comprehensive overview (to be published).

Let’s get started!

Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations

The FDA, as one of the leading regulatory agencies in the field of medical technology, has been developing guidelines for the safe and effective use of artificial intelligence (AI) in medical devices. The draft “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,” published in January 2025, aims to provide clear recommendations for manufacturers of AI-based medical devices.

The central topic of this new guidance document is the marketing submission for AI-enabled medical devices (e.g., 510(k), De Novo, and PMA). The FDA provides detailed requirements for the documentation that manufacturers must provide to demonstrate the safety, performance, and usability of their AI models.

The document takes a lifecycle approach to ensure that manufacturers consider not only development, but also the long-term monitoring and continuous improvement of their AI systems.

The full 64-page document is available here: Artificial Intelligence-Enabled Device Software Functions 2025 Draft

Implications for Usability & UI

Below are highlighted sections of the FDA guidance document “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” that define specific requirements for usability and human-AI interaction in medical devices.

1. User-centered transparency and explainability of AI

Original quotes:

“A comprehensive approach to transparency and bias is particularly important for AI-enabled devices, which can be difficult for users to understand due to the opacity of many models and the models’ reliance on data correlations that may not directly map to biologically plausible mechanisms of action.”

“While sponsors of devices that do not have a critical task may not need to submit a human factors validation testing report, they may choose to use the process outlined in the Human Factors Guidance, or another approach of their choosing to evaluate usability, to test their device design, and support the efficacy of risk controls.”

Traditional medical devices must ensure that their operation is functionally understandable. However, they are not required to explain the underlying algorithms. The FDA takes a different view of AI products. The draft specifically calls for transparency mechanisms so that users can understand how the AI made a decision.

Implications for Usability & UI:

  • AI outputs must be clearly understandable. Ideally, there should be an option for users to retrieve further contextual information.
  • UIs should include visualization strategies for AI decisions (e.g. probability values, uncertainty ranges, etc.). This should clearly communicate how reliable a result is.
  • It is recommended to include so-called “model cards”. These are a kind of instruction manual for AI products, including information about training data and limitations.
  • Usability tests are recommended to validate the comprehensibility of the results and the transparency measures.
  • It is recommended that the process of designing the user interface follows a holistic approach: from the context of use, to the identification of tasks, to the actual design.
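As an illustration, a “model card” can be thought of as a small, structured data object the UI can render on demand. The following sketch is a hypothetical, simplified schema (the field names are our assumptions, not an FDA-mandated format):

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model card for an AI-enabled device function.
# Field names and example values are illustrative only.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)

    def render(self) -> str:
        """Render the card as plain text for display in the device UI."""
        lines = [
            f"Model: {self.model_name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data}",
            "Known limitations:",
        ]
        lines += [f"  - {item}" for item in self.known_limitations]
        return "\n".join(lines)

card = ModelCard(
    model_name="Example segmentation model v1.2",
    intended_use="Aid for radiologists; outputs require human review",
    training_data="12,000 de-identified CT studies from 3 sites (illustrative)",
    known_limitations=[
        "Not validated for pediatric patients",
        "Reduced accuracy on low-dose scans",
    ],
)
print(card.render())
```

In a real device, such a card would be populated from the documentation submitted to the FDA and made retrievable from the UI as contextual information.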

2. Dynamic decision making and control of results

Original quotes:

“It is important to know how robust the device output is due to potential variations in the measurement system (e.g., whether repeated tests by users will generate significantly different device output due to operator difference and signal variation).”

“Together, performance validation and human factors validation (or an evaluation of usability as appropriate) help provide FDA with information to understand how the device may be used and perform under real world circumstances.”

Traditional medical devices often have predefined algorithms. They deterministically produce the same result for the same input data. The guidance document states that this is not necessarily true to the same extent for AI products.

It directly addresses the robustness and reproducibility of AI models and, depending on the device, requires verification studies. This also has usability and UI implications.

Implications for Usability & UI:

  • Users need to understand which results are dynamically generated by the AI and which are not.
  • Users should understand how the AI is used to produce the results (e.g. via the model cards mentioned).
  • Manufacturers must demonstrate that users are able to interact with and understand the device as intended.
  • The guidance notes that it may be difficult for users to control the results (see also transparency in the first point). Therefore, evidence may be required here as well, especially for learning models or updates.

3. User and AI form a team

Original quote:

“The intended use and clinical workflow of AI-enabled devices span a continuum of decision-making roles from more autonomous systems to supportive (aid) tools that assist specific users, but rely on the human to interpret the AI outputs and ultimately make clinical decisions.”

The draft refers to the Good Machine Learning Practice: Guiding Principles. We have already discussed the GMLP in our series. The original can be found here. In GMLP, the user and the AI are seen as an “AI team” working together to accomplish a task.

Implications for Usability & UI: (taken from our GMLP article)

  • Usability validation must shift from UI design to human-AI collaboration – regulators will expect manufacturers to assess how well users understand and interact with AI outputs in real-world workflows.
  • New usability metrics will be required – beyond traditional effectiveness and efficiency measures, AI usability must now consider trust in AI decisions, interpretability of model outputs, and human-AI decision-making efficiency.
  • Error mitigation strategies must be usability-tested – regulatory focus will extend beyond AI accuracy to how users recover from errors and how AI designs minimize automation complacency and over-reliance.
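The new usability metrics mentioned above can be made concrete. As an illustration (our own construction, not a metric prescribed by the guidance), a usability test could log, for each decision, whether the AI output was correct and whether the user followed it, and then compute an over-reliance rate:

```python
# Illustrative usability-test log: (ai_was_correct, user_followed_ai) per decision.
session_log = [
    (True, True), (True, True), (False, True),   # third entry: user followed a wrong AI output
    (False, False), (True, True), (False, True),
]

def over_reliance_rate(log):
    """Fraction of incorrect AI outputs that the user nonetheless accepted
    (a proxy for automation complacency)."""
    wrong = [followed for correct, followed in log if not correct]
    return sum(wrong) / len(wrong) if wrong else 0.0

def appropriate_reliance_rate(log):
    """Fraction of all decisions where the user agreed with a correct AI
    output or rejected an incorrect one."""
    good = [correct == followed for correct, followed in log]
    return sum(good) / len(good)

print(f"Over-reliance on wrong AI outputs: {over_reliance_rate(session_log):.2f}")
print(f"Appropriate reliance overall:      {appropriate_reliance_rate(session_log):.2f}")
```

Metrics of this kind evaluate the human-AI team rather than the model alone, which is the shift in perspective the GMLP principles call for.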

4. New risk control challenges

We wrote in point 1 that users may have problems understanding how AI works. This creates new risks that must be controlled.

Original quote:

“One aspect of risk management that can be particularly important for AI-enabled devices is the management of risks that are related to understanding information that is necessary to use or interpret the device, including risks related to lack of information or unclear information. Misunderstood, misused, or unavailable information can impact the safe and effective use of a device.”

Implications for Usability & UI:

  • The new risks must be fully considered in the risk analysis. The guidance emphasizes human-AI interaction and user understanding in several places.
  • Usability evaluations are recommended to verify related control measures. In Appendix D, “Considerations for Usability Evaluation,” these considerations are explicitly applied to products that do not require a human factors validation test report (see quote in Section 2).

Discussion: Usability as a Critical Factor for AI-Based Medical Devices

The FDA guidance document “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” makes it clear that usability is becoming not only a quality feature, but also a key regulatory factor for AI-enabled medical devices. While usability in traditional medical devices is primarily concerned with safe and efficient operation, the FDA is now setting new requirements for transparency, interpretability, and human-AI collaboration.

A key point is the requirement for comprehensible AI. Users must not only understand how they interact with the system, but also how the AI produces its results. This means that manufacturers must develop new UI concepts that make AI results explainable, including visual transparency mechanisms such as model cards, uncertainty displays, or alternative decision paths.

Another key topic is human-AI collaboration. The FDA emphasizes that AI systems cannot be considered in isolation, but rather as part of a human-machine team. This requires new usability validations that evaluate not only the user interface, but the entire interaction and decision making between user and AI. For learning models and adaptive algorithms in particular, this means that manufacturers must prove that users understand and can safely interact with changes in the AI.

Risk management for usability issues is also becoming more important. The FDA specifically requires that misunderstandings or lack of information be assessed and addressed as risks. Manufacturers must ensure that users can interpret the AI correctly, identify errors, and respond appropriately. This makes usability testing even more important.

This is still draft guidance, so the requirements may evolve. But the direction is clear: usability will be an integral part of the regulatory evaluation of AI-based medical devices in the future. Companies should prepare early and ensure that their systems are not only powerful, but also transparent, understandable, and safe to use.
