This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Beyond the Black Box: Avenues for Transparency in Regulating Radiological AI/ML-enabled SaMD via the FDA 510(k) Pathway
4
Citations
5
Authors
2024
Year
Abstract
Background: The majority of AI/ML-enabled software as a medical device (SaMD) has been cleared through the FDA 510(k) pathway, but with limited transparency on algorithm development details. Because algorithm quality depends on the quality of the training data and algorithmic input, this study aimed to assess the availability of algorithm development details in the 510(k) summaries of AI/ML-enabled SaMD. Clinical and/or technical equivalence between predicate generations was then assessed by mapping the predicate lineages of all cleared computer-assisted detection (CAD) devices, to ensure equivalence in diagnostic function.

Methods: The FDA’s public database was searched for CAD devices cleared through the 510(k) pathway. Details on algorithmic input, including annotation instructions and definition of ground truth, were extracted from summary statements, product webpages, and relevant publications. These findings were cross-referenced with the American College of Radiology–Data Science Institute AI Central database. Predicate lineages were also manually mapped through product numbers included within the 510(k) summaries.

Results: In total, 98 CAD devices had been cleared at the time of this study, the majority being computer-assisted triage (CADt) devices (67/98). Notably, none of the cleared CAD devices provided image annotation instructions in their summaries, and only one provided access to its training data. Similarly, more than half of the devices did not disclose how the ground truth was defined. Only 13 CAD devices were reported in peer-reviewed publications, and only two were evaluated in prospective studies. Significant deviations in clinical function were seen between cleared devices and their claimed predicates.
Conclusion: The lack of image annotation instructions and significant mismatches in clinical function between predicate generations raise concerns about whether substantial equivalence in the 510(k) pathway truly equates to equivalent diagnostic function. Avenues for greater transparency are needed to enable independent evaluations of safety and performance and to promote trust in AI/ML-enabled devices.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations