This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Transparency and Authority Concerns with Using AI to Make Ethical Recommendations in Clinical Settings
Citations: 3
Authors: 2
Year: 2024
Abstract
In response to recent proposals to use artificial intelligence (AI) to automate ethics consultations in healthcare, we raise two main problems for the prospect of having healthcare professionals rely on AI-driven programs for ethical guidance in clinical matters. The first concern is that, because these programs would effectively function as black boxes, this approach seems to preclude the kind of transparency that would allow clinical staff to explain and justify treatment decisions to patients, fellow caregivers, and those tasked with providing oversight. The second is that the authority these programs would need to be granted in order to do the work set out for them would leave clinical staff unable to provide meaningful safeguards in cases where the programs' recommendations are morally problematic.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations