This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The future of AI regulation in drug development: a comparative analysis
6 Citations · 4 Authors · 2025
Abstract
As artificial intelligence (AI) transforms drug development, regulatory frameworks are evolving to oversee its implementation, particularly at the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA). This paper makes three contributions to understanding emerging regulatory approaches. First, we offer a comparative analysis of how these agencies have responded to AI-driven advances, incorporating new US executive orders and the European Union's (EU) AI Act. Second, we propose a novel analytical framework for understanding regulatory divergence: the FDA's flexible, dialog-driven model contrasts with the EMA's structured, risk-tiered approach, reflecting broader institutional and political-economic differences. While the former encourages innovation via individualized assessment, it can create uncertainty about general expectations; by contrast, the EMA's clearer requirements may slow early-stage AI adoption but provide more predictable paths to market. Third, we examine whether AI applications (spanning target identification, generative chemistry, and clinical trial 'digital twins') are mature enough for standardized regulation, particularly amid shifting US policies and the EU's structured oversight regime. Our analysis reveals patterns of convergence on risk-based principles but persistent transatlantic implementation differences, compounded by diminished US engagement in international cooperation. We conclude that heightened regulatory uncertainty in the USA under a new administration's 'America First' stance and more stable, formalized rules in Europe both present opportunities and challenges for AI-driven innovation in drug development.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations