This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
How do medical professionals make sense (or not) of artificial intelligence? A social-media-based computational grounded theory study (Preprint)
Citations: 0
Authors: 5
Year: 2022
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) holds tremendous potential for healthcare, as has been demonstrated across various use cases ranging from automated triage to assisted diagnosis. However, the limitations of AI must also be carefully considered in a fact-based debate on optimal use scenarios. In light of the prominent discussion around trust issues with AI, it is important to assess how and what physicians think about the topic in order to avoid general resistance to the technology among medical practitioners. </sec> <sec> <title>OBJECTIVE</title> The aim of the present study was to identify key themes in medical professionals' discussions of AI and to examine how these themes reflect existing perceptions of AI. </sec> <sec> <title>METHODS</title> Using a computational grounded theory approach, 181 Reddit threads in the medical subreddits r/medicine, r/radiology, r/surgery, and r/psychiatry were analyzed to identify key themes. We combined a quantitative, unsupervised machine learning approach for detecting thematic clusters with a qualitative data analysis to gain a deeper understanding of medical professionals' perceptions of AI. </sec> <sec> <title>RESULTS</title> Three key themes emerged from the Reddit analysis: (1) the perceived consequences of AI, (2) perceptions of the physician–AI relationship, and (3) a proposed way forward. The first and second themes, in particular, were found in posts that appeared to be partially biased toward physicians' fear of being replaced, their skepticism of AI, and their fear that patients may not accept AI. The third theme, by contrast, consists of factual discussions about how AI and medicine must develop further to enable broad adoption of AI and to yield fruitful outcomes for healthcare. 
</sec> <sec> <title>CONCLUSIONS</title> Many physicians aim to yield the greatest value from AI for their patients and thus engage in constructive criticism of the technology. At the same time, a concerningly large number of physicians demonstrate perceptions that appear to be at least partially biased and that could hinder both successful use-case implementation and societal acceptance of AI in the future. Therefore, such biased perceptions need to be monitored and – where possible – countered. </sec>
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations