
Aisha Khan, PhD, MBA
University of Miami Miller School of Medicine
United States
FDA Releases Draft Guidance on AI Use in Regulatory Submissions
In January 2025, the U.S. Food and Drug Administration (FDA) issued a draft guidance titled Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products. This guidance reflects the agency's growing recognition of artificial intelligence (AI) as a powerful tool in the development and lifecycle management of medical products, including its potential to support regulatory submissions across manufacturing, nonclinical, and clinical domains.
The guidance is aimed at industry sponsors, applicants, and researchers using AI to generate data or insights intended to support regulatory decisions. It does not apply to AI used solely for internal discovery or operational tools that do not directly impact regulatory outcomes.
Risk-Based Credibility Framework
At the core of the guidance is a seven-step Risk-Based Credibility Assessment Framework, which the FDA recommends sponsors use to establish trust in AI model outputs that support regulatory decisions. The framework provides a structured approach to ensuring those outputs are scientifically valid and trustworthy:
1. Define the question of interest
2. Determine the model’s context of use (COU)
3. Assess model risk based on decision consequence and model influence
4. Establish credibility goals aligned with the model’s risk level
5. Design a validation plan
6. Perform the credibility assessment
7. Document the approach, results, and rationale
This framework is adapted from FDA-recognized standards (e.g., ASME V&V 40) and encourages a transparent, reproducible approach to model evaluation.
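As a rough illustration, the pairing at the heart of step 3 — model influence combined with decision consequence — can be sketched as a small lookup. The tier labels and the mapping below are illustrative assumptions for demonstration, not values taken from the guidance:

```python
# Illustrative sketch only: the guidance describes model risk as a function
# of model influence and decision consequence, but the specific tiers and
# mapping here are assumptions, not the FDA's own matrix.

LEVELS = ("low", "medium", "high")

def model_risk(influence: str, consequence: str) -> str:
    """Combine model influence and decision consequence into a risk tier."""
    if influence not in LEVELS or consequence not in LEVELS:
        raise ValueError("levels must be 'low', 'medium', or 'high'")
    # Sum the two ordinal levels (0-2 each) and map the 0-4 score to a tier.
    score = LEVELS.index(influence) + LEVELS.index(consequence)
    return ("low", "low", "medium", "high", "high")[score]

# A high-influence model informing a high-consequence decision (e.g., dose
# selection) lands in the highest tier, implying the most stringent
# credibility expectations.
print(model_risk("high", "high"))  # high
print(model_risk("low", "low"))    # low
```

The point of the sketch is simply that validation rigor should scale with where a model falls in this two-dimensional space, which is the logic the framework's later steps (credibility goals, validation plan) build on.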
Implications for Sponsors
FDA expects sponsors to clearly define how the AI model contributes to decision-making and assess both model influence (how much the model impacts the decision) and decision consequence (the risk of being wrong). For higher-risk applications—such as dose selection, patient stratification, or in-process control—model validation expectations will be more stringent.
Sponsors are encouraged to:
- Engage the agency early through pre-IND, Type C, or INTERACT meetings
- Document model training data, validation methods, and performance metrics
- Anticipate and mitigate risks related to data drift and model generalizability
- Use real-world evidence or supplemental data sources responsibly, with a focus on reproducibility and traceability
What’s Not Covered
As noted above, the guidance does not apply to AI used in drug discovery or to operational efficiencies that do not directly impact product quality or patient safety.
Challenges and Opportunities
While the guidance is nonbinding, it sets clear expectations that will likely shape the design of AI tools used in regulatory submissions. Importantly, FDA acknowledges the rapidly evolving nature of AI and machine learning, allowing for flexibility and updates as technologies mature.
This draft guidance also raises important questions for sponsors exploring AI/machine learning (ML) in process control, clinical monitoring, or nonclinical modeling, such as:
- What thresholds of explainability and transparency are necessary?
- How will iterative or adaptive models be handled in ongoing trials?
- What is the regulatory path for AI models used across multiple development programs?
Forward-Looking Perspective
As sponsors increasingly explore AI to support decision-making across nonclinical, clinical, and manufacturing domains, this guidance lays the groundwork for consistent regulatory expectations. However, establishing model credibility will require significant upfront planning, detailed documentation, and a clear understanding of context-specific risks. For those pursuing expedited development pathways or considering adaptive AI tools, the bar for regulatory readiness is now more clearly defined.
Final Thoughts: A Foundation for Responsible AI in Regulatory Science
The FDA’s draft guidance represents a significant step toward integrating AI responsibly into regulatory decision-making. By offering a clear structure for assessing AI model credibility, the agency is encouraging innovation while maintaining its focus on patient safety, product quality, and scientific rigor.
For sponsors and researchers aiming to incorporate AI across the drug development continuum—from preclinical modeling to postmarketing surveillance—this guidance should serve as both a roadmap and a reminder: robust validation, documentation, and risk assessment will be critical to future regulatory success.
References:
U.S. Food and Drug Administration. Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products: Draft Guidance for Industry. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological