This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Suspended responsibility: The trouble with integrating researchers into shared-responsibility models for machine learning-supported decisions
0 citations · 3 authors · 2025
Abstract
With the rise of machine learning-based decision support systems, the responsibility for potential errors is regularly questioned. Scholarly work on this issue has criticized a "responsibility gap," that is, unattributable responsibility for the effects of machine learning-supported decisions. One proposal to close the responsibility gap is to include scientists, who lay the basis for the functioning of machine learning models, in distributed responsibility models, assigning them a share of responsibility for potential errors. To date, however, responsibility models that include scientists have not gained a foothold in regulation. To provide an empirical basis for the discussion around novel responsibility models, this study offers a social scientific analysis of an interdisciplinary machine learning consortium that works on machine learning-based decision support systems for healthcare, an area where errors have particularly fundamental consequences for individuals. We investigate researchers' speculations about their responsibility for the downstream effects of their ML research results after translation to clinical practice. We find that researchers point to tensions in the scientific sector, as well as to agential, local, and temporal shifts of their research outputs during translation to clinical practice, as a major source of what we call "suspension of responsibility." Our insights contribute to debates about novel responsibility models that are fair to patients, doctors, and scientists, and can inform similar debates beyond healthcare.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations