This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Co-Lifecycle Governance for Learning Medical AI: A Global Hybrid Framework for Synchronizing Regulatory Oversight with Adaptive Intelligence (Preprint)
Citations: 0
Authors: 5
Year: 2025
Abstract
Artificial intelligence (AI) in health care is increasingly defined not by static algorithms but by adaptive intelligence—systems that evolve over time through interactions with data, clinicians, and clinical environments. This adaptive capacity creates a structural mismatch with regulatory frameworks built for technologies whose behavior remains static. As AI models drift, recalibrate, or degrade in real-world contexts, they dissolve the linear boundaries between design, deployment, and clinical interpretation. These temporal, epistemic, and organizational frictions expose responsibility gaps that cannot be resolved through incremental modifications to legacy oversight structures. Regulators across major jurisdictions are beginning to respond to these challenges, though with differing orientations. The United States advances mechanisms for predictable adaptation, including Predetermined Change Control Plans (PCCPs), real-world evidence frameworks, and lifecycle-oriented quality management reforms. The European Union emphasizes precautionary, rights-based governance through the AI Act and modernized liability rules. South Korea, operating within a hyper-connected digital health ecosystem, has introduced the Digital Medical Products Act (DMPA), one of the world’s first comprehensive statutory frameworks for learning medical AI. Despite philosophical differences, these regulatory trajectories converge on a shared insight: learning AI systems cannot be governed by static rules or episodic evaluation. This Viewpoint proposes Co-Lifecycle Governance as a conceptual framework to synchronize regulatory oversight with adaptive intelligence. Rather than treating oversight as a discrete event, Co-Lifecycle Governance frames regulation as a continuous, synchronized process grounded in four pillars: continuous validation, agile change management, proactive performance surveillance, and distributed accountability. Each pillar functions as a structural antidote to the responsibility frictions that arise when AI systems evolve faster than expectations surrounding them. Together, these pillars provide a governance grammar capable of supporting safe, iterative model improvement while maintaining system-level trust. Drawing from the strengths of U.S. predictability, EU accountability, and Korean scalability, this paper outlines a global hybrid pathway that synthesizes predictability, accountability, and operational feasibility. Learning AI will not wait for governance to catch up; oversight must evolve in lockstep with adaptive intelligence. Co-Lifecycle Governance offers a foundation for regulatory systems that not only regulate learning AI, but also learn with it—at the speed at which adaptive intelligence actually changes.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations