The US Food and Drug Administration (FDA) is seeking industry comment on practical approaches to measuring and evaluating the performance of AI-enabled medical devices in the real world.
With a feedback window open until 1 December, the agency is particularly interested in public comments outlining strategies to detect, assess, and mitigate performance changes over time to ensure that medical devices with an AI component on the US market remain safe and effective throughout their lifecycle.
Many AI-enabled medical devices marketed in the US are primarily evaluated through retrospective testing or static benchmarking. The FDA noted that while these approaches can help establish a baseline understanding of a given device’s performance, they are not designed to predict behaviour in dynamic, real-world environments.
The FDA added that ongoing, systematic performance monitoring is increasingly recognised as relevant to maintaining safe and effective AI use by observing how systems actually behave during clinical deployment.
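The agency’s notice does not prescribe any particular mechanism for this. Purely as an illustration of what ongoing, systematic performance monitoring might look like in practice, the sketch below compares a rolling window of post-deployment outcomes against a pre-market validation baseline and flags degradation beyond a set tolerance. The class name, the choice of accuracy as the tracked metric, the window size, and the alert threshold are all assumptions for illustration, not anything the FDA has specified.

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative post-market monitor: tracks a rolling window of
    prediction outcomes and flags drift when live accuracy falls more
    than a set tolerance below the pre-market validation baseline.
    (Hypothetical sketch; not an FDA-specified method.)"""

    def __init__(self, baseline_accuracy: float, window_size: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy          # accuracy from pre-market validation
        self.window = deque(maxlen=window_size)    # most recent outcomes (1 = correct)
        self.tolerance = tolerance                 # allowed drop before an alert fires

    def record(self, prediction, ground_truth) -> None:
        """Log one deployed case once its ground truth becomes known."""
        self.window.append(1 if prediction == ground_truth else 0)

    def degraded(self) -> bool:
        """Return True if rolling accuracy has dropped beyond tolerance."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough post-market data yet
        live_accuracy = sum(self.window) / len(self.window)
        return live_accuracy < self.baseline - self.tolerance


# Usage: alert when live accuracy falls more than 5 points below baseline.
monitor = PerformanceMonitor(baseline_accuracy=0.92)
for prediction, truth in [(1, 1), (0, 1), (1, 1)]:  # placeholder data stream
    monitor.record(prediction, truth)
    if monitor.degraded():
        print("Performance drift detected: trigger clinical review")
```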
As of this month, the agency has approved 141 AI-enabled medical devices in 2025, bringing the total number of approved devices to 1,250.
The FDA has already implemented several initiatives, including Predetermined Change Control Plans (PCCPs), which streamline the regulatory process for medical device manufacturers. A PCCP allows a manufacturer to obtain advance authorisation for certain post-approval changes to an AI algorithm, such as retraining or updates, without making a new submission.
The FDA also applies the Total Product Life Cycle (TPLC) approach, a policy that encourages its staff to take a longitudinal, integrated, and more comprehensive view of device safety, effectiveness, and quality.
Despite these policies being in place, the call for further public feedback suggests the FDA is keen to continue evolving its approach to ensure AI medical devices remain fit for purpose across various real-world applications.
Europe’s take on AI medical device regulation
While it is currently unclear whether the FDA will move to strengthen pre-market protections for AI medical devices, the EU has already adopted measures to assess conformity before market entry. The EU AI Act, which entered into force in August 2024, begins enforcing its requirements for AI systems classified as “high risk” in 2026.
Under the regulation, high-risk AI systems must meet stringent data governance, transparency, and risk management standards, with conformity assessed by independent third parties.
In a paper published in Nature Medicine in February, a research team comprising authors from the Else Kröner Fresenius Center (EKFZ) for Digital Health and the UK’s Nuffield Department of Surgical Sciences at the University of Oxford proposed that integrating transparent, mandatory feedback collection mechanisms directly into the user interfaces of AI-based digital health tools (DHTs) could be a viable way to track the safety and suitability of AI devices over time.
According to the paper, such measures could significantly improve user experience, increase patient safety through the early identification of problems, reduce the administrative burden of monitoring tools, and strengthen public confidence in AI-based devices.
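The paper argues for feedback capture embedded in the tool itself rather than routed through a separate reporting channel. As a purely hypothetical sketch, the snippet below shows one shape such an embedded feedback record could take; the schema, field names, and device identifier are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class ClinicianFeedback:
    """One structured feedback record captured directly in a DHT's
    interface. (Hypothetical schema; not from the Nature Medicine paper.)"""
    device_id: str             # identifier of the AI-enabled device (invented)
    case_id: str               # de-identified reference to the clinical case
    agreed_with_output: bool   # did the clinician accept the AI's output?
    severity: str              # e.g. "none", "minor", "patient-safety"
    comment: str               # optional free-text detail
    timestamp: str = ""

    def __post_init__(self):
        # Stamp the record at capture time if no timestamp was supplied.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


def submit_feedback(feedback: ClinicianFeedback) -> str:
    """Serialise the record for transmission to a post-market monitoring store."""
    return json.dumps(asdict(feedback))


# Example: a clinician flags a disagreement from within the tool's UI.
record = ClinicianFeedback(
    device_id="cad-lung-ct-v3",  # hypothetical device identifier
    case_id="case-0042",
    agreed_with_output=False,
    severity="minor",
    comment="Nodule flagged in prior scan was missed here.",
)
print(submit_feedback(record))
```

Aggregating records like these across deployments is what would let a manufacturer, or a regulator, spot the early safety signals the authors describe.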
“FDA seeks industry feedback on AI medical device safety monitoring” was originally created and published by Medical Device Network, a GlobalData owned brand.