This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable AI in GxP Validation: Balancing Automation, Traceability, and Regulatory Trust in the Pharmaceutical Industry
Citations: 8
Authors: 1
Year: 2025
Abstract
The pharmaceutical industry is increasingly integrating Artificial Intelligence (AI) and Machine Learning (ML) into critical operations, including Good Practice (GxP) validation processes. While AI offers substantial benefits in automation, efficiency, and predictive capability, the “black-box” nature of many AI models poses significant challenges to regulatory compliance, particularly in areas demanding transparency, traceability, and reproducibility. Explainable Artificial Intelligence (XAI) emerges as a strategic solution, providing interpretable insights into algorithmic decision-making that align with the stringent requirements of regulators such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), as well as guidelines from the International Council for Harmonisation (ICH). This paper investigates the role of XAI in GxP validation by examining the balance between automation and regulatory trust. The study employs a conceptual framework and case study analysis (the FDA’s AI/ML Action Plan, EMA algorithmic transparency guidelines, and pharmaceutical industry pilot programs at Novartis, Roche, and Pfizer) to evaluate the impact of XAI on validation processes. Comparative tables and graphs illustrate key findings, including efficiency gains achieved through automation, the trade-offs between speed and explainability, and the measurable improvements in audit readiness and traceability when XAI is implemented. Results indicate that while traditional validation methods ensure compliance, they are often resource-intensive and inflexible; AI-driven approaches increase efficiency but raise concerns about traceability gaps; and XAI-based validation provides a middle ground, optimizing automation while preserving interpretability and regulatory trust. The findings highlight that successful adoption of XAI in GxP validation requires a harmonized approach involving technical explainability tools, regulatory frameworks, and industry–regulator collaboration. Future directions point toward integrating emerging technologies such as blockchain for immutable audit trails and federated learning for privacy-preserving compliance. Overall, this research contributes to regulatory science and pharmaceutical digital transformation by proposing XAI as a viable path to achieving both operational efficiency and trustworthy compliance in highly regulated environments.
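As an illustration of the kind of explainability tooling and auditable record-keeping the abstract refers to, the following minimal Python sketch pairs a model decision with a simple leave-one-feature-out local explanation and writes both to a JSON audit record. The batch-release feature names, the synthetic data, and the local_attributions helper are hypothetical and not taken from the paper; production XAI stacks would typically rely on dedicated libraries such as SHAP or LIME.

```python
# A minimal, hypothetical sketch: pairing a model prediction with a
# leave-one-feature-out explanation and logging both as an audit record.
# Feature names, data, and the helper below are illustrative only.
import json
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["assay_purity", "ph", "temperature", "fill_weight"]
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic pass/fail labels

model = RandomForestClassifier(random_state=0).fit(X, y)

def local_attributions(model, x, background):
    """Score each feature by how much the predicted pass probability
    drops when that feature is replaced by its background mean."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    attributions = {}
    for i, name in enumerate(feature_names):
        x_masked = x.copy()
        x_masked[i] = background[:, i].mean()
        masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        attributions[name] = round(float(base - masked), 4)
    return float(base), attributions

prob, explanation = local_attributions(model, X[0], X)
# The traceable artifact an auditor could review next to the decision.
audit_record = {
    "predicted_pass_probability": round(prob, 4),
    "feature_attributions": explanation,
}
print(json.dumps(audit_record, indent=2))
```

Persisting the explanation alongside the prediction in this way is one concrete route to the audit readiness and traceability improvements the abstract attributes to XAI-based validation.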
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations