OpenAlex · Updated hourly · Last updated: April 8, 2026, 16:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI in health: keeping the human in the loop

2023 · 39 citations · Journal of the American Medical Informatics Association · Open Access
Open full text at publisher


Abstract

Public discourse about artificial intelligence (AI), and generative AI in particular, is ubiquitous. AI has been a focus of research in biomedical and health informatics since the field's inception, and in publications of the Journal of the American Medical Informatics Association since its inaugural issue, which included a threaded bibliography on medical diagnostic decision support systems by Dr. Randy Miller (a future JAMIA Editor-in-Chief).[1] In that same issue, and apropos of the title of this editorial, Dr. Ted Shortliffe's provocative editorial was entitled "Dehumanization of patient care-are computers the problem or the solution?"[2] This month I highlight 5 papers focused on AI that provide key lessons about the importance of keeping the human in the loop.

Lyell et al[3] examined real-world safety problems involving machine learning (ML)-enabled medical devices by analyzing safety events reported to the US Food and Drug Administration's Manufacturer and User Facility Device Experience program. Using an existing framework for safety problems with health information technology, they identified whether a reported problem was due to the ML device or its use, along with key contributors to the problem. They also classified the consequences of events. The majority of the 266 safety events were associated with ML devices that primarily used image-based rather than signal-based data. Ninety-three percent of problems involved the ML device, with 82% related to data acquisition and fewer than 10% to algorithm errors. Sixteen percent of the events resulted in harm. Use problems (7%) were 4 times more likely than device problems to cause harm. This study highlights the need to approach ML device safety from a whole-system perspective, including user interactions with devices, rather than focusing only on the algorithm.

ChatGPT has stimulated much debate among the public, as well as in our field of biomedical and health informatics, about the use of large language models and generative AI. Liu et al[4] compared the utility of ChatGPT-generated (n = 37) versus human-generated (n = 29) suggestions for improving 7 clinical decision support (CDS) alerts. Five clinicians rated the suggestions on usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. Nine of the 37 (24%) recommendations generated by ChatGPT were among the 20 highest-rated suggestions, and clinicians perceived them as offering unique perspectives. The study findings suggest that ChatGPT could complement, but not replace, human reviewers in optimizing CDS alert logic.

May 11, 2023 marked the end of the federal public health emergency for COVID-19 in the United States. However, public health considerations around Long COVID remain. With the goal of de-black-boxing ML-based phenotype algorithms for Long COVID, the Case Study in this issue


Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Mobile Health and mHealth Applications