This is an overview page with metadata for this scientific work. An external link to the full text is currently not available.
To Incorporate or Not to Incorporate AI for Critical Judgments: The Importance of Ambiguity in Professionals’ Judgment Process
7 citations · 3 authors · 2020
Abstract
Artificial intelligence (AI) technologies promise to transform how professionals conduct their work. In many cases of professional work, AI is being used to augment rather than replace human judgment, as professionals are still expected to take responsibility for the outcome. Yet organizational researchers are only starting to understand how this augmentation unfolds in practice. We add to this emergent understanding by conducting an in-depth field study in a major US hospital where AI tools were being used within three different diagnostic radiology specialties for critical diagnoses: breast cancer, lung cancer, and bone age. Only in one of the three diagnostic settings did professionals meaningfully incorporate AI results into their final judgments; in the other two settings, AI results were consistently overruled. This study unpacks how professionals manage ambiguity in their judgment-forming processes and how it relates to the way they experience the opacity of AI tools. In order to make a diagnosis and take professional responsibility for their judgment, physicians have developed approaches to reducing ambiguity through sequential stages of their judgment-forming process. Their use of AI tools, however, tended to increase ambiguity, because physicians lacked the practical ability to interrogate AI results and their professional judgments often conflicted with these results. Professionals ended up incorporating AI's results into their judgments only when they were able to reduce this overall ambiguity in subsequent stages of their judgment process. Our findings unpack the challenges involved in augmenting professional judgment with opaque but powerful technology.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations