Introduction
In a world where Artificial Intelligence is being integrated into all aspects of life and could become the trigger for a fifth industrial revolution, governments around the world are implementing various approaches to regulate and control its use.
The European Union is taking the lead with the publication of the Artificial Intelligence Act (AI Act), Regulation (EU) 2024/1689[1], a groundbreaking approach to regulating artificial intelligence with profound implications for the medical device industry. This comprehensive regulatory framework aims to address the complex challenges posed by AI technologies while ensuring safety, transparency, and ethical development. All organizations, especially medical device manufacturers, need to understand the new AI Act requirements before introducing AI-based devices onto the EU market. They should also be prepared for the adoption of similar regulations in other territories, and note that the EU AI Act can require organizations outside the EU to adhere to it under certain circumstances[2]. This article covers the status of AI regulations in the EU, US, and Australia.
Regulatory Context and Scope
The AI Act aims to enhance the EU market and promote trustworthy AI by protecting fundamental rights and addressing potential harmful effects while supporting innovation.
The Act applies to all providers that make an AI system or general-purpose AI model available on the EU market, regardless of where the company is based. AI systems developed exclusively for scientific research, military, or national security purposes are excluded from the regulation.
Classification
The EU AI Act introduces a risk-based approach to AI regulation, categorizing AI systems into four risk levels: unacceptable risk (prohibited practices), high risk, limited risk, and minimal risk.
Medical AI devices predominantly fall under the high-risk category, subjecting them to:
– Comprehensive risk management systems
– Rigorous conformity assessments
– Detailed documentation requirements
– Ongoing monitoring and reporting obligations
Conformity Assessment
AI medical devices will require a new certification under the AI Act in addition to CE certification under Regulations (EU) 2017/745 and 2017/746 (MDR/IVDR)[3].
It is expected that most Notified Bodies will be authorized under the AI Act and therefore will be able to certify medical devices under both the AI Act and MDR/IVDR regulations concurrently. In addition, similar to the implementation of MDR and IVDR, some delays in the availability of authorized NBs are to be expected[4].
Registration
Before placing a medical device with a high-risk AI system on the market or putting it into service, the provider or, where applicable, the authorized representative must register themselves and their system in the EU AI database once it becomes available[1]. This is similar to the EUDAMED system: basic information about the high-risk AI system and its provider will be listed.
Timeline
Key dates[1]:
- 2 Aug 2024 — entry into force
- 2 Aug 2025 — transition period begins; new general-purpose AI models must comply with the general provisions
- 2 Aug 2026 — most provisions apply
- 2 Aug 2027 — high-risk systems that are part of safety components or medical devices must comply
- 31 Dec 2030 — AI systems part of a large-scale IT system must comply
Impact
Providers of AI medical devices will face demanding new regulatory requirements in the EU:
- New notification obligations
- Additional conformity assessment
- Complex compliance and verification testing
In particular, medical device economic operators and developers of high-risk AI systems face new obligations, or increased emphasis on existing MDR obligations, in order to comply with the AI Act[1]:
- Human oversight: AI systems must be overseen by qualified employees within the company who have appropriate training and authority
- Data quality: Deployers must ensure input data is relevant to the system's intended purpose
- Data protection assessment: AI deployers must conduct data protection impact assessments before placing any AI system into service
- Record keeping: AI systems deployers or developers must maintain the automatically generated logs
- Transparency: The use of AI systems needs to be communicated to the device users
- Instructions for use: AI system developers must include additional information on the capabilities and limitations of the system and descriptions of its mechanisms
- System monitoring: Incidents related to AI systems must be reported
- Risk Assessment: AI systems developers must identify additional risks, such as threats to health, safety or fundamental rights
Australian AI Regulations
The Australian Government’s Department of Industry, Science and Resources (DISR) has released a discussion paper on Safe and Responsible AI[5], acknowledging that Australia’s existing regulatory framework already addresses AI through various laws and policies, including the Privacy Act, the Australian Consumer Law, and Therapeutic Goods Administration (TGA) regulations governing Software as a Medical Device (SaMD)[5]. Notably, Australia has not adopted broader AI-specific legislation akin to the EU’s AI Act. However, through the Safe and Responsible AI consultation process, the Australian Government is developing a tailored approach to regulating AI. This approach prioritizes a national risk-based framework, where AI systems with higher potential risks are subject to more stringent controls[5].
In its interim response to the discussion paper, the DISR has committed to developing an approach grounded in the following principles:
- Proportionate regulation, avoiding unnecessary burdens on low-risk AI applications
- Collaboration and transparency, ensuring public and expert engagement
- International cooperation, aligning with the Bletchley Declaration[6]
- Community-centric, prioritizing people and communities
This framework aims to strike a balance between promoting innovation and mitigating the risks associated with AI[5].
The TGA is working closely with the DISR to assess the impact on medical device regulations and pre-market processes[7].
United States AI Regulations
The US Food and Drug Administration (FDA) has recognized the importance of enabling these technologies and has streamlined the authorization process for AI-enabled medical devices to reduce time-to-market and facilitate innovation[8].
This is reflected in the large number of publications, guidelines, and plans addressing the new challenges posed by AI-based medical devices, which offer specific recommendations for the development and submission of premarket authorizations for these kinds of devices. Particularly notable are the recent draft guidance Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations[9] and Good Machine Learning Practice for Medical Device Development[10].
However, the FDA does not currently plan to introduce an independent assessment process similar to the EU's; instead, AI-based medical devices will follow the standard premarket pathways (PMA, 510(k), or De Novo)[11].
With regard to wider US AI regulations, the Biden administration was working toward greater controls over AI development across all industries. However, in January 2025, the Trump administration signed an Executive Order eliminating the Biden-era policies to remove burdensome requirements and promote innovative AI development[12]. This shift is likely to have little impact on the FDA's existing AI frameworks, but it highlights the tension between innovation and regulation as governments try to manage adoption of the technology and its potential risks.
Conclusion
The EU AI Act represents a pivotal moment in AI regulation, particularly for medical devices. Its comprehensive approach offers advantages to users such as enhanced patient safety, increased transparency, and global regulatory leadership.
However, medical device manufacturers face many new challenges in terms of additional conformity assessment processes and complex compliance and verification activities. Moreover, the landscape is not uniform: the introduction of AI regulations in other jurisdictions is proceeding more slowly, and some countries, such as the US, are removing regulatory obstacles as a result of shifts in government policy.
In general, medical device manufacturers must adapt and invest in robust compliance strategies to align with any new AI requirements.
About the authors:
Javier Varela
With more than 12 years of experience in the field of regulatory affairs for medical devices, Javier Varela joined PharmaLex Spain in 2018 as Medical Devices team leader for the Iberia region, assisting companies in new medical device registration, market access and transition to the Medical Device Regulation 2017/745 (MDR) and 2017/746 (IVDR), especially focused on software as a medical device.
Yervant Chijian
Yervant Chijian is a seasoned RA/QA consultant specializing in medical device regulatory compliance, helping clients achieve efficient market access in major global markets. With over 20 years of experience in the medical devices industry, Yervant has a strong background in both manufacturing and product development. His expertise encompasses Regulatory Strategic Planning and submissions across key regions, including the United States, Europe, Canada, and Australia.
[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence, EUR-Lex, June 2024. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
[2] Do not go gentle into that good night: The European Union’s and China’s different approaches to the extraterritorial application of artificial intelligence laws and regulations, Computer Law & Security Review, July 2024.
[3] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32017R0745
[4] The designation of notified bodies under the upcoming Artificial Intelligence Act, Team NB Position Paper. https://www.team-nb.org/wp-content/uploads/members/M2022/Team-NB%20PositionPaper-AI%20Designation-V1-20221216.pdf
[5] Safe and responsible AI in Australia consultation, Department of Industry, Science and Resources. https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/safe-and-responsible-ai-in-australia-governments-interim-response.pdf
[6] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023, Nov 2023. https://www.industry.gov.au/publications/bletchley-declaration-countries-attending-ai-safety-summit-1-2-november-2023
[7] Artificial Intelligence (AI) and medical device software, Department of Health and Aged Care. https://www.tga.gov.au/how-we-regulate/manufacturing/manufacture-medical-device/manufacture-specific-types-medical-devices/artificial-intelligence-ai-and-medical-device-software
[8] Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices, FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
[9] Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, FDA, Jan 2025. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing
[10] Good Machine Learning Practice for Medical Device Development: Guiding Principles, FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles
[11] Step 3: Pathway to Approval, FDA. https://www.fda.gov/patients/device-development-process/step-3-pathway-approval
[12] Fact Sheet: President Donald J. Trump Takes Action to Enhance America’s AI Leadership, The White House, January 2025.