This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Sufficiency of Disclosure in Medical AI Patents: Evidence from 865 Granted Patents Across the US, China, and the EU
Citations: 0
Authors: 2
Year: 2026
Abstract
The rise of health care AI raises concerns over whether patent disclosure supports reproducibility and legal validity. This study analyzes 865 granted medical AI patents (2015-2025) from the US, China, and the EU using a five-dimensional framework (algorithm transparency, training data accessibility, model reproducibility, result verifiability, and mathematical support) implemented through NLP-assisted expert scoring. Results suggest limited technical transparency; approximately 40% of patents score zero in at least two dimensions. Performance varies significantly: algorithm transparency is relatively strong (>60% score 2), while training data accessibility is less prevalent (4.6% score 2) and mathematical support is frequently omitted (39.4% score 0). Statistical testing indicates US patents significantly outperform Chinese patents (p < 0.001), while EU results remain exploratory (N = 31, mean 6.2). These patterns appear associated with institutional factors, strategic applicant behaviour, and technical complexity. Such limitations may pose risks to enforceability and market development, highlighting the need for targeted disclosure improvements. This study contributes a replicable framework for translating legal standards into measurable indicators, providing cross-jurisdictional evidence to guide examination, litigation, and policy refinement in medical AI governance.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,611 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,504 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,025 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations