This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Barriers and Solutions to Efficient Health Care AI Implementation
Citations: 0
Authors: 4
Year: 2025
Abstract
Health care artificial intelligence (HAI) is rapidly expanding, with use cases addressing a variety of maladies (eg, risk of cardiovascular event, malignant neoplasms, falls). Close attention must be paid as systems implement HAI relating to stigmatized conditions (eg, suicide), since models often rely on specific data and face unique challenges with performance, design, and evaluation. While challenges exist in the application of these models, solutions are few. We discuss gaps and solutions relating to 4 domains (data, algorithmic performance, implementation, and evaluation) regarding the implementation of HAI models identifying patients at high risk of stigmatized outcomes.

Gaps in Data: It Begins With Data

Gap 1: Data We Do Not Measure
Unmeasured confounders hinder model accuracy and impact.1 For example, data related to social determinants of health (eg, financial, housing, and food insecurity) are often unmeasured and/or undocumented in electronic health records.2 Their inclusion could improve model accuracy, particularly for stigmatized conditions, such as suicide.3

Solution 1: Prompting to Prompt
The future of HAI could involve a prompting-to-prompt approach, in which models detect when unmeasured data would improve model accuracy and prompt clinicians to gather this information from patients (a minimal illustrative sketch follows this abstract). For example, if data regarding food or housing insecurity would improve a model's accuracy or impact, clinicians could be prompted to assess and document such data in real time.

Gap 2: Data We Do Not Address
Once captured, modifiable risk factors in stigmatized conditions are often not presented to clinicians in an actionable manner.4 Many of these factors, such as social isolation, physical activity levels, alcohol consumption, diet, and sleep habits, could be acted on to reduce the risk of stigmatized conditions, increasing the impact of our models.5,6

Solution 2: Actionable Insight
At some sites, informaticians have successfully embedded actionable interventions into clinician workflows. For instance, a model identifying patients at high risk of suicide prompted clinicians at an outpatient neurology clinic to screen for suicidality. Integrating this actionable step increased the likelihood of appropriate screening and subsequent referrals to mental health services.7 This demonstrates the potential of actionable prompts to enhance HAI model translation and effectiveness.

Gap 3: Data We Do Not Have
Lack of interoperable data results in fragmented, incomplete assessments, reducing model accuracy.8-10 Key opportunities exist for advancements in interoperability.
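Solution 1 describes models that detect when unmeasured data would improve accuracy and prompt clinicians to collect it. The following is a minimal sketch of what that flow might look like; the feature names, importance weights, and prompting threshold are hypothetical illustrations and are not drawn from the article.

```python
# Minimal sketch of the "prompting to prompt" idea from Solution 1: before
# scoring a patient, check whether high-value features are missing from the
# record and, if so, surface a prompt asking the clinician to collect them.
# All field names, weights, and thresholds below are hypothetical.

from dataclasses import dataclass, field

# Hypothetical feature importances (eg, from a trained suicide-risk model).
FEATURE_IMPORTANCE = {
    "phq9_score": 0.30,
    "food_insecurity": 0.25,     # social determinant, often undocumented
    "housing_insecurity": 0.20,
    "alcohol_use": 0.15,
    "sleep_hours": 0.10,
}

PROMPT_THRESHOLD = 0.15  # only prompt for features worth the interruption


@dataclass
class PatientRecord:
    """Subset of an EHR record; None marks unmeasured/undocumented data."""
    values: dict = field(default_factory=dict)

    def missing(self) -> list[str]:
        return [f for f in FEATURE_IMPORTANCE if self.values.get(f) is None]


def prompts_for(record: PatientRecord) -> list[str]:
    """Return clinician-facing prompts for missing high-importance data."""
    return [
        f"Please assess and document '{feature}' for this patient."
        for feature in record.missing()
        if FEATURE_IMPORTANCE[feature] >= PROMPT_THRESHOLD
    ]


if __name__ == "__main__":
    record = PatientRecord(values={"phq9_score": 12, "sleep_hours": 6})
    for prompt in prompts_for(record):
        print(prompt)
    # prompts for food_insecurity, housing_insecurity, and alcohol_use
```

In a real deployment, such prompts would presumably be driven by the deployed model's own feature attributions and delivered through the EHR workflow rather than a static importance table.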
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations