Navigating AI and cybersecurity in the rapidly evolving world of SaMDs

Author: Yervent Chijian, Director, Team Lead Medical Devices / IVD, Australia at PharmaLex
Advances made in software as a medical device (SaMD) and software in a medical device have led to two major trends: artificial intelligence (AI), with its potential to transform healthcare, and, on the flip side, an increase in cybersecurity vulnerabilities.

Both these developments have prompted regulatory authorities across many jurisdictions to tighten oversight and issue more comprehensive and streamlined guidance.

Regulatory approaches to AI

In the area of AI, and machine learning (ML) in particular, there have been widespread efforts to achieve a more harmonized approach to regulatory considerations.

Three regulatory authorities – the US Food and Drug Administration (FDA), the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) and Health Canada – published the Good Machine Learning Practice for Medical Device Development: Guiding Principles. The 2021 joint publication provides 10 overarching principles or practices that manufacturers should take note of when including AI/ML in their device software. Other regulators, such as Australia’s Therapeutic Goods Administration (TGA), regularly refer to this document when conducting premarket reviews.

The FDA is widely regarded as a leader in the regulation of AI and ML medical device software, having, for example, created a Digital Health Center of Excellence to address these types of products. The agency has also published a discussion paper on a framework for modifications to AI/ML-based SaMD that leverages International Medical Device Regulators Forum (IMDRF) risk categorization principles.

The paper introduced some valuable concepts to address issues related to AI and ML devices. One is a predetermined change control plan, to be included in premarket submissions, in which manufacturers describe anticipated modifications and the methodology used to implement those changes. In April 2023, the FDA released a draft guidance document, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions, for industry comment. The guidance provides additional detail on how to support iterative improvement through modifications to ML software, while continuing to provide a reasonable assurance of device safety and effectiveness, through the establishment of a Predetermined Change Control Plan (PCCP).

The FDA also introduced transparency into real-world performance, under which manufacturers provide periodic updates to the agency. Key areas of focus for the FDA are real-world performance monitoring, good machine learning practices, a patient-centered approach to transparency, and regulatory science methods related to algorithm bias and robustness.

The MHRA is also seeking to be a leader in this area and recently announced changes to its Software and AI as a Medical Device Change Programme, which it says is designed to provide a regulatory framework that protects patients and the public while establishing the UK as a center for medical device software innovation.

Other jurisdictions, including China, Korea and Singapore, have released various standards and guidelines specifically around AI in recent years.

In Europe, a proposed Artificial Intelligence Act seeks to achieve harmonized rules “for the development, placement on the market and use of AI systems” through a risk-based approach. There are, however, concerns that the act could conflict with areas within the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR) with regard to benefit-risk assessment, post-market monitoring and traceability across the distribution chain.

More broadly, the hope is that standards organizations and the IMDRF will leverage these principles to advance good machine learning practices, specifically as they apply to medical technology.

Applying AI to healthcare innovation

Regulators and industry alike recognize that AI and machine learning are transforming healthcare through their ability to derive insights from the vast amount of data generated during day-to-day health interactions. By tapping into that data, medical device manufacturers can create products that help healthcare providers improve patient care.
Examples include imaging systems that use AI algorithms to provide diagnostic information on skin cancer and smart sensor devices that can estimate the likelihood of someone having a heart attack.

However, there are important design and development considerations. Key among these is the dataset, which is crucial for training and validating the AI/ML algorithms used in medical devices. Manufacturers must assess the quality of the dataset used to train the model and any potential bias in the data. Questions to consider include:
• Is the patient population relevant to the use case?
• Will the data used to train the model be available at the point of prediction?
• Has permission been granted to use the data for this purpose?
• What bias might the healthcare professional bring into the dataset?
• Is there bias within the patient demographics?
• Are there issues with drift? That is, does new data remain similar to the training data, and will the model still perform on it? (A sketch of one such check follows this list.)
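
One way to make the drift question concrete is a routine statistical comparison between the training data and the data a deployed model actually sees. The following is a minimal sketch, assuming a single numeric feature, synthetic data and an illustrative 0.05 significance threshold; real monitoring would cover many features, with thresholds justified in the validation plan.

```python
# Hypothetical drift check: compare one input feature's distribution in
# the training set against data observed in production. Feature values
# here are synthetic; the 0.05 threshold is illustrative, not a rule.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
train_feature = rng.normal(loc=120.0, scale=15.0, size=5_000)  # e.g. systolic BP at training time
prod_feature = rng.normal(loc=128.0, scale=15.0, size=1_000)   # same feature seen in production

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution has drifted away from the training data.
statistic, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.05:
    print(f"Possible drift (KS={statistic:.3f}, p={p_value:.4f}): review before trusting predictions.")
else:
    print(f"No significant drift detected (KS={statistic:.3f}, p={p_value:.4f}).")
```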

There are also ethical considerations. Manufacturers should ensure that the actions of the AI algorithm deliver a net good to society, that the algorithm avoids entrenching historical disadvantage and discriminating on sensitive features, that stakeholders have the information they need to make informed decisions, and that high standards of governance apply to the design, training, deployment and operation of AI algorithms.

Performance and clinical evaluation are also key in AI development. Performance evaluation must be properly completed to demonstrate to regulators that the software performs as intended.

One of the biggest challenges for manufacturers is determining when their algorithm is good enough to be validated, especially if it is adaptive. It is important, therefore, to establish a pre-release validation plan that identifies particular pre-specifications, so that once the software meets its targets, validation activities can commence. Regulators are aware of the evolving nature of AI, so justifying the initial validation is key; much of the subsequent evolution will be controlled under the change control protocols. Final algorithms must be validated before release, based on the specifications and algorithm change protocols, and training and validation datasets must be independent.
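
To illustrate, the following is a minimal sketch of such a pre-release gate. The metric names and target values are hypothetical pre-specifications for illustration only; real targets would come from the manufacturer's validation plan and change control protocol.

```python
# Hypothetical pre-release validation gate. Metric names and targets are
# illustrative pre-specifications, not regulatory requirements.
PRE_SPECIFICATIONS = {
    "sensitivity": 0.90,  # minimum acceptable sensitivity
    "specificity": 0.85,  # minimum acceptable specificity
    "auc": 0.92,          # minimum acceptable area under the ROC curve
}

def ready_for_validation(measured: dict[str, float]) -> bool:
    """True only if every pre-specified target is met on a held-out
    validation set that is independent of the training data."""
    return all(measured.get(name, 0.0) >= target
               for name, target in PRE_SPECIFICATIONS.items())

# Metrics measured on the independent validation set (illustrative values).
measured = {"sensitivity": 0.93, "specificity": 0.88, "auc": 0.94}
if ready_for_validation(measured):
    print("Pre-specifications met; formal validation activities can commence.")
else:
    print("Targets not met; continue development under the change control protocol.")
```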

Tackling cybersecurity

One major area of concern for governments, regulators, healthcare organizations and patients alike is cybersecurity.

Regulators have taken significant steps to tackle cybersecurity issues. In 2022, Australia updated its cybersecurity guidance, while China, Japan, Korea and Singapore have all tightened their cybersecurity measures. Also in 2022, the US released a draft update of its guidance, Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions. Other jurisdictions to tighten their cybersecurity guidance include Europe, the UK and Brazil.

The FDA draft guidance document introduced a new term, the secure product development framework (SPDF): a set of processes that reduces the number and severity of vulnerabilities in products throughout the device lifecycle. These processes include security risk management, designing for security, security controls, regulatory submission and ongoing monitoring. If vulnerabilities are identified, risk management processes are initiated once again.

Pre-market SPDF considerations include:
• Designing for security – ensuring authenticity, authorization, availability, confidentiality and secure and timely updatability
• Transparency – providing users with information on vulnerabilities, communication interfaces, and third-party software vulnerabilities
• Security risk management – threat modeling, assessing third-party components (one such check is sketched after this list), addressing unresolved anomalies and documenting risk management
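
As an illustration of assessing third-party components, the minimal sketch below screens a software bill of materials (SBOM) against a short list of known-vulnerable versions. The file name, the CycloneDX-style JSON layout and the vulnerable entries are illustrative assumptions; a production process would query a maintained vulnerability database such as the NVD.

```python
# Hypothetical screen of an SBOM against known-vulnerable components.
# The file name, JSON layout and vulnerable entries are assumptions.
import json

KNOWN_VULNERABLE = {
    ("openssl", "1.1.1"),  # illustrative entries only
    ("log4j", "2.14.1"),
}

def flag_vulnerable_components(sbom_path: str) -> list[str]:
    """Return a finding for each SBOM component on the vulnerable list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [
        f"{c.get('name')} {c.get('version')} has known vulnerabilities"
        for c in sbom.get("components", [])
        if (c.get("name", "").lower(), c.get("version", "")) in KNOWN_VULNERABLE
    ]

for finding in flag_vulnerable_components("device_sbom.json"):
    print(finding)
```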

If risks are identified, manufacturers are expected to implement security controls that consider issues such as authentication, authorization, execution integrity, confidentiality, event detection, and resilience and recovery.
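
To make one of these controls concrete, the following minimal sketch shows an authenticity and execution integrity check: verifying a signed software update before installation. The key handling is deliberately simplified for illustration; on a real device the public key would be provisioned securely and the private key would never leave the manufacturer.

```python
# Hypothetical secure-update check using an Ed25519 signature. The key
# pair is generated inline only so the sketch is self-contained.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # held by the manufacturer
public_key = private_key.public_key()               # provisioned on the device

update_package = b"firmware v2.1 payload"     # stand-in for the real update bytes
signature = private_key.sign(update_package)  # produced when the update is built

# On the device: refuse to install unless the signature verifies.
try:
    public_key.verify(signature, update_package)
    print("Update signature valid; installation may proceed.")
except InvalidSignature:
    print("Update rejected: signature did not verify.")
```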

One increasingly important consideration is security architecture views, as regulators request documentation defining the security architecture in premarket submissions. Of course, before any product is released, manufacturers should conduct cybersecurity testing to identify vulnerabilities and mitigate any likely threats.
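
Such testing can include simple negative (robustness) tests alongside formal penetration testing. The sketch below assumes a hypothetical parse_message function standing in for a device's real input handling and checks that malformed inputs are rejected cleanly rather than crashing the software.

```python
# Hypothetical negative tests: malformed inputs must be rejected with a
# controlled error, never a crash. parse_message is a stand-in parser.
import json

def parse_message(raw: bytes) -> dict:
    """Accept only well-formed JSON carrying a recognized command."""
    message = json.loads(raw.decode("utf-8"))
    if message.get("command") not in {"start", "stop", "status"}:
        raise ValueError("unknown command")
    return message

MALFORMED_INPUTS = [
    b"",                           # empty payload
    b"\xff\xfe\x00",               # invalid UTF-8
    b'{"command": "rm -rf"}',      # unexpected command value
    b"{" * 10_000,                 # truncated, deeply nested structure
]

for raw in MALFORMED_INPUTS:
    try:
        parse_message(raw)
        print(f"FAIL: malformed input accepted: {raw[:20]!r}")
    except (ValueError, UnicodeDecodeError):  # JSONDecodeError subclasses ValueError
        print(f"PASS: rejected cleanly: {raw[:20]!r}")
```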

In March 2023, the FDA published the final guidance document Cybersecurity in Medical Devices: Refuse to Accept Policy for Cyber Devices and Related Systems Under Section 524B of the FD&C Act. This guidance relates to the changes made by FD&C Act section 524B, “Ensuring Cybersecurity of Devices.” From 1 October 2023, the FDA may issue a “refuse to accept” decision if a premarket submission does not comply with the cybersecurity provisions in section 524B. This highlights the ever-growing scrutiny of regulators in relation to cybersecurity.

The future of SaMD and AI

AI is a growing area within medical devices, and many regulators have identified a need to help industry manage the complexities that inevitably accompany AI development. The guidelines that have been issued can help manufacturers mitigate risk. Key, however, will be ensuring that AI datasets and algorithms are considered carefully to minimize bias and address ethics, and that cybersecurity is considered holistically throughout the product lifecycle.

Disclaimer:

This blog is intended to communicate PharmaLex’s capabilities which are backed by the author’s expertise. However, PharmaLex US Corporation and its parent, Cencora, Inc., strongly encourage readers to review the references provided with this article and all available information related to the topics mentioned herein and to rely on their own experience and expertise in making decisions related thereto as the article may contain certain marketing statements and does not constitute legal advice. 
