This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Implementing the human right to science in the regulatory governance of artificial intelligence in healthcare
Citations: 37
Authors: 1
Year: 2023
Abstract
Artificial intelligence (AI) enables a medical device to optimize its performance through machine learning (ML), including the ability to learn from past experiences. In healthcare, ML is currently applied within controlled settings, for instance in devices that diagnose conditions like diabetic retinopathy without clinician input. Current risk-based regulatory approaches are inadequate for allowing AI-based medical devices (AIMDs) to adapt actively to their data environments through ML. Recent and innovative regulatory changes that treat AIMDs as software, or 'software as a medical device' (SaMD), and the adoption of a total device/product lifecycle approach (rather than a point-in-time one) reflect a shift away from a strictly risk-based approach towards one that is more collaborative and participatory in nature, and anticipatory in character. These features are better explained by a rights-based approach and are consistent with the human right to science (HRS). With reference to the recent explication of the normative content of the HRS by the United Nations Committee on Economic, Social and Cultural Rights, this paper explains why a rights-based approach centred on the HRS could be a more effective response to the regulatory challenges posed by AIMDs. The paper also considers how such a rights-based approach could be implemented as a regulatory network that draws on a 'common fund of knowledges' to formulate anticipatory responses to adaptive AIMDs. In essence, the HRS provides both the mandate and the obligation for states to ensure that the regulatory governance of high-connectivity AIMDs becomes increasingly collaborative and participatory in approach and pluralistic in substance.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,357 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,482 citations