What Project Elsa means for the FDA’s approach to AI

Artificial intelligence plays an integral role across the product lifecycle in the pharmaceutical, biotechnology, and medical devices industries. Regulatory authorities have been taking steps to respond to the exponential growth of AI functionalities with various guidances, frameworks, and discussion papers.

Enter Elsa, the US Food and Drug Administration’s (FDA) new AI tool, which, according to FDA Commissioner Marty Makary, seeks to modernize how the agency functions, including speeding up clinical protocol reviews and reducing time to review completion[1].

Formally launched in early June, Elsa is designed to help agency staff work more efficiently by assisting with activities such as reading, writing, and summarizing. It was built within a high-security GovCloud environment, ensuring that employees access only internal data and that the data is kept secure[2].

During the early rollout, some users flagged quality issues with Elsa's output and versioning, highlighting the need for human-in-the-loop oversight and underscoring the FDA's own acknowledgment that hallucination must be actively prevented. This reinforces the importance of validation and transparency for any AI use, even internally.

“If users are utilizing Elsa against document libraries and it was forced to cite documents, it can’t hallucinate,” FDA Chief AI Officer Jeremy Walsh said in a recent interview. “If users are just using the regular model without using document libraries, Elsa could hallucinate just like any other large language model.”[3]
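
The distinction Walsh draws maps onto a familiar retrieval-grounding pattern: answers are composed only from passages retrieved out of a controlled document library, and every answer must cite its sources. Below is a minimal, hypothetical Python sketch of that pattern; the names (Passage, grounded_answer) and the naive keyword retrieval are illustrative stand-ins, not Elsa's actual implementation.

```python
# Minimal sketch of document-grounded answering with forced citation.
# All names are hypothetical; this is not Elsa's actual implementation.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # identifier of the source document in the library
    text: str    # the retrieved passage itself

def retrieve(query: str, library: list[Passage], k: int = 3) -> list[Passage]:
    """Naive keyword-overlap retrieval standing in for a real search index."""
    terms = set(query.lower().split())
    hits = [p for p in library if terms & set(p.text.lower().split())]
    hits.sort(key=lambda p: -len(terms & set(p.text.lower().split())))
    return hits[:k]

def grounded_answer(query: str, library: list[Passage]) -> str:
    """Answer only from retrieved passages, always citing their sources."""
    hits = retrieve(query, library)
    if not hits:
        # Refusal beats invention: without a source there is no answer.
        return "No supporting documents found."
    citations = ", ".join(p.doc_id for p in hits)
    return f"{hits[0].text} [sources: {citations}]"

library = [Passage("SOP-12", "Protocol reviews must be completed within 30 days.")]
print(grounded_answer("protocol review timeline", library))
```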

Project Elsa: Privacy and compliance considerations

The FDA has adopted Elsa to support various pharmacovigilance (PV) activities, including literature review, triage, and case intake. The adoption of this tool reflects the agency's ongoing commitment to leveraging technological innovation under disciplined controls, and it is relevant to our ongoing discussions around the responsible use of AI in regulated environments.

Summary of the FDA's use of Elsa

The Elsa tool is being used as a support mechanism within the FDA’s PV and regulatory submission ecosystem. Its primary functions include:

  • Automated literature review to identify adverse event signals
  • Triage and prioritization of case information based on predefined medical and regulatory parameters (see the rule-based sketch after this list)
  • Support for case intake, ensuring more timely processing of individual case safety reports (ICSRs)
  • Medical writing and submission assistance, especially in streamlining drafting and documentation tasks for regulatory filings
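
To make the triage function concrete, the sketch below shows what rule-based prioritization on predefined parameters might look like. The field names and rules are hypothetical, though seriousness and expectedness are the classic drivers of expedited case handling.

```python
# Hypothetical rule-based triage: priority is assigned from predefined
# medical and regulatory parameters. Fields and thresholds are illustrative.
def triage_priority(case: dict) -> str:
    """Assign a review priority to an incoming safety case."""
    if case.get("serious") and case.get("unexpected"):
        return "expedited"  # serious and unexpected cases are handled first
    if case.get("serious"):
        return "high"
    return "routine"

print(triage_priority({"serious": True, "unexpected": True}))  # expedited
print(triage_priority({"serious": False}))                     # routine
```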

Importantly, the FDA has emphasized that Elsa is not a replacement for human review or decision-making. Instead, it serves to enhance the efficiency, accuracy, and consistency of workflows while remaining subject to FDA oversight and quality assurance protocols. This reflects the FDA’s cautious and measured approach to AI governance in line with its broader digital transformation strategy.

US privacy law considerations

From a U.S. data privacy standpoint, the use of Elsa appears to be consistent with existing federal privacy frameworks, provided that:

Personal data is adequately de-identified

U.S. federal privacy law, particularly HIPAA where applicable, permits the use of de-identified data for public health and regulatory purposes without the need for individual consent.

In the context of pharmacovigilance, most of the individual case data handled for literature-based surveillance and signal detection is either anonymized or stripped of directly identifiable elements.
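
As an illustration of that principle, the sketch below removes direct identifiers from a hypothetical case record before downstream processing. The field names are invented for this example; a production pipeline would implement the full HIPAA Safe Harbor identifier list (or use expert determination), not this abbreviated subset.

```python
# Illustrative only: strips a short, hypothetical subset of direct
# identifiers. HIPAA Safe Harbor enumerates 18 identifier categories.
DIRECT_IDENTIFIERS = {"patient_name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(case: dict) -> dict:
    """Drop direct identifiers; generalize date of birth to year only."""
    cleaned = {k: v for k, v in case.items() if k not in DIRECT_IDENTIFIERS}
    if "date_of_birth" in cleaned:
        # Safe Harbor permits keeping the year (with limits for ages over 89).
        cleaned["birth_year"] = cleaned.pop("date_of_birth")[:4]
    return cleaned

case = {"patient_name": "Jane Doe", "date_of_birth": "1964-07-02",
        "event": "rash", "suspect_drug": "DrugX"}
print(deidentify(case))
# {'event': 'rash', 'suspect_drug': 'DrugX', 'birth_year': '1964'}
```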

Use is within statutory authority

The FDA is acting within its regulatory remit to ensure public safety by leveraging AI tools for PV. The agency has statutory authority under the Federal Food, Drug, and Cosmetic Act to collect and process safety data.

Vendor oversight and contractual safeguards

If Elsa involves third-party service providers or vendors, those engagements must be governed by appropriate contractual safeguards, such as Business Associate Agreements (where applicable), and include clear obligations around data use, security, and confidentiality.

Risk-based approach to AI deployment

U.S. regulators, including the FDA and the Department of Health and Human Services (HHS), continue to encourage a risk-based approach to AI deployment. Transparency, accountability, and human oversight are key tenets, all of which the FDA has confirmed are in place with the use of Elsa.

No commercial repurposing

Data collected or processed using AI tools for regulatory purposes must not be repurposed for commercial gain unless consistent with privacy laws and informed consent requirements. The FDA’s deployment of Elsa is strictly for regulatory and safety monitoring purposes.

Final observations

The FDA’s disciplined and limited deployment of the Elsa tool highlights how AI can be responsibly used to support regulated functions such as PV, without compromising data protection principles. From a privacy governance perspective, it sets a helpful precedent: using AI not to automate human judgment, but to augment expert review and improve regulatory efficiency under appropriate oversight.

Cencora will continue to monitor privacy developments, particularly as more regulators adopt AI in the regulatory science and PV domains.

An AI-driven framework: Elsa in context

Elsa’s development clearly aligns with recent AI-related frameworks that carry meaningful implications for sponsors, particularly in pharmacovigilance, regulatory operations, document automation, and advanced pharmaceutical manufacturing.

In a paper on the use of AI, initially published in 2023 and revised in February 2025[4], the FDA explored major themes regarding the application of AI and machine learning (ML), including current and potential uses of AI to enhance drug development. The paper, which seeks to facilitate dialogue with all stakeholders, explores opportunities across the entire product identification, selection, and development process, as well as for postmarketing purposes. Key among these are safety surveillance, such as signal detection and literature mining, and the potential to improve the manufacturing process by increasing equipment reliability and providing early warnings.

In signal detection and signal management, there is potential to use AI tools to identify patterns, as long as PV teams can interpret those patterns. Other areas where AI is already being used, and where there is greater potential to expand its use, include literature screening and interpretation, automation of aggregate safety reporting, and even the development and maintenance of the pharmacovigilance system master file (PSMF), which is a requirement in the European Union[5]. AI could also be deployed to automate safety data quality checks, which, if properly validated, could improve the safety database workflow and increase quality oversight.
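
A hypothetical sketch of such an automated quality check appears below: each rule is explicit and each finding is reported, which is what makes the check itself testable and auditable. The field names and rules are illustrative, not any specific safety database schema.

```python
# Hypothetical quality check over ICSR-like records; explicit rules and
# reported findings keep the check validatable and auditable.
REQUIRED_FIELDS = ("case_id", "suspect_drug", "adverse_event", "report_date")

def check_record(record: dict) -> list[str]:
    """Return quality findings for one record (empty list means it passed)."""
    findings = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            findings.append(f"missing required field: {field}")
    age = record.get("patient_age")
    if age is not None and not 0 <= age <= 120:
        findings.append(f"patient_age out of plausible range: {age}")
    return findings

record = {"case_id": "US-0001", "suspect_drug": "DrugX", "adverse_event": "",
          "report_date": "2025-06-02", "patient_age": 134}
for finding in check_record(record):
    print(f"{record['case_id']}: {finding}")
```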

Stakeholders have been encouraged to consider and engage with the agency in three key areas in the context of leveraging AI for drug development. These are:

  • Human-led governance, accountability, and transparency
  • The quality, reliability, and accuracy of the data
  • Development, performance, monitoring, and validation of the model

With Elsa, the FDA has demonstrated its recognition that AI can play an important role in high-volume data environments. This paves the way for industry to adopt tools to support key processes, so long as proper oversight and validation are in place. Validation is key: it requires that tools are tested and that those tests are documented and certified.

The agency also published an internally focused discussion paper on AI and drug manufacturing to explore AI’s potential for monitoring and controlling advanced manufacturing practices within a risk-based regulatory framework[6]. The paper outlines how the FDA governs its own use of tools, emphasizing the importance of standards for validating AI models; clarity regarding regulatory assessment and oversight; and appropriate data safeguards.

Making sense of Elsa in use cases

The implication for industry, as well as for agency reviewers and inspectors, is that the FDA expects AI in any regulated workflow to be deployed with clear governance and guardrails, mirroring those the agency recommends as it continues to develop policy. Elsa further embodies this framework, underscoring the importance of strict containment and of taking steps to ensure that AI tools deployed by industry do not hallucinate.

For the agency, Elsa represents a step toward the goal of establishing a real-time regulatory environment. The agency sees AI as offering the potential for immediate oversight of safety reports, which would improve postmarket and premarket surveillance[3].

For sponsors, Elsa could provide something akin to a blueprint for the development of AI-enabled capabilities, such as natural language processing (NLP) tools for pharmacovigilance functions, including literature triage, case intake, and MedDRA term suggestions.

By keeping the objectives of Elsa in mind when developing PV tools, sponsors would ideally ensure auditability, including source traceability, model validation, and human-in-the-loop review, as sketched below.
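
A minimal sketch of what that audit trail might look like, with hypothetical names throughout: every AI suggestion records its source and the model version that produced it, and nothing is final without a named human reviewer.

```python
# Hypothetical audit record for an AI triage suggestion. Python 3.10+
# syntax; all field and model names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TriageSuggestion:
    case_id: str
    suggested_term: str           # e.g., a candidate MedDRA term
    source_ref: str               # traceable pointer to the source text
    model_version: str            # which validated model produced it
    reviewer: str | None = None   # set only on human sign-off
    decided_at: str | None = None

    def approve(self, reviewer: str) -> None:
        """Record the human-in-the-loop decision in the audit trail."""
        self.reviewer = reviewer
        self.decided_at = datetime.now(timezone.utc).isoformat()

suggestion = TriageSuggestion("US-0001", "Rash maculo-papular",
                              "PMID:12345, p. 2", "triage-model-1.3")
suggestion.approve("jsmith")
print(suggestion)
```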

Conclusion

Project Elsa offers a clear glimpse into how the FDA is approaching AI: it is not about flashy capabilities, but about disciplined control. By designing Elsa to run in a secure, contained environment, to ground outputs in internal document libraries, and to limit hallucination through forced citation, the FDA is modeling a risk-based, auditable, and fully contained approach.

For industry, the message is clear: AI tools intended for use in pharmacovigilance, regulatory writing, or submission support must prioritize transparency, validation, and functional limits over ambition. While the agency continues to update Elsa, the tool clearly signals the FDA's intent. Elsa is not an AI experiment; rather, it is a regulatory signal that industry should adopt a structured and cautious approach to AI development. Elsa is less of a sandbox and more of a blueprint.

About the author:

Michael Day, Ph.D., is Senior Director of Regulatory Strategy and CMC at PharmaLex. With more than 25 years of experience in pharmaceutical development and regulatory affairs, Mike specializes in U.S. FDA strategy, regulatory due diligence, and CMC lifecycle management for both early- and late-stage assets. He has led global development programs across a wide range of therapeutic areas, including biologics, biosimilars, and complex combination products.

[1] FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People, YouTube. https://www.youtube.com/watch?v=jp6TvncQYMU

[2] FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People, FDA news release, 2 June 2025. https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people

[3] FDA: If used for document libraries, Elsa cannot hallucinate; unlikely to be connected to the Internet. RAPS, 20 June 2025. https://raps.org/news-and-articles/news-articles/2025/6/fda-says-elsa-can-t-hallucinate,-unlikely-to-ever

[4] Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products, FDA. https://www.fda.gov/media/167973/download

[5] Guideline on good pharmacovigilance practices (GVP), EMA and HMA. https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-good-pharmacovigilance-practices-module-ii-pharmacovigilance-system-master-file-rev-2_en.pdf

[6] Artificial Intelligence in Drug Manufacturing, FDA, 2023. https://www.fda.gov/media/165743/download


Disclaimer:

This blog is intended to communicate PharmaLex’s capabilities which are backed by the author’s expertise. However, PharmaLex US Corporation and its parent, Cencora, Inc., strongly encourage readers to review the references provided with this article and all available information related to the topics mentioned herein and to rely on their own experience and expertise in making decisions related thereto as the article may contain certain marketing statements and does not constitute legal advice. 

Contact us for more information
