This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Enhancing Patient Safety Event Analysis Using Artificial Intelligence: A Pilot Study of an Artificial Intelligence–Powered Report Analysis Tool
Citations: 0
Authors: 7
Year: 2025
Abstract
OBJECTIVES: To address the challenge of analyzing large volumes of patient safety event (PSE) reports, we developed and evaluated an AI-powered software tool. The primary goal was to assess the tool's potential to support analysts and uncover novel trends in patient safety databases.

METHODS: A pilot evaluation was conducted with seven organizations (4 health care facilities and 3 patient safety organizations) to assess the tool's impact on analysts' workflows and their ability to uncover insights. Feedback was gathered through interviews with patient safety analysts using the tool. Two human factors experts analyzed the findings using a human cognition framework for information visualization to identify strengths and areas for improvement. Novel insights from PSE data were systematically recorded, capturing trends and themes that emerged during the analysis process.

RESULTS: Participants from 6 of 7 institutions reported that the tool helped identify valuable insights, such as trends in procedural errors, inconsistencies in event categorization, and emerging issues with specific medications and devices. The emerging themes algorithm effectively highlighted previously undetected patterns by grouping related events and emphasizing novel keywords. However, participants noted some irrelevant keywords due to limitations in narrative data quality. The tool's design principles, including chunking information and highlighting key terms, improved efficiency in reviewing reports.

CONCLUSIONS: The AI-driven tool demonstrated potential to enhance patient safety by supporting analysts in detecting trends and patterns in PSE reports. Future iterations will address identified limitations and further refine its ability to organize data around user mental models for improved usability.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations