This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Tailoring Discharge Summaries to Caregivers' Needs: part 1 of the 'Framework & Implementation of AI Tools' (FRAIT) Project (Preprint)
2 Citations · 4 Authors · Year: 2025
Abstract
BACKGROUND: Discharge summaries are critical for continuity of care but often lack clarity and personalization, making it difficult for healthcare providers to retrieve essential information. While large language models (LLMs) offer potential for automating summary generation, their effectiveness depends heavily on the quality and contextual relevance of the prompts used.

OBJECTIVE: The objective of this study was to develop and evaluate a human-centered, replicable framework for creating individualized prompts that guide LLMs in generating discharge summaries tailored to the specific needs of healthcare providers.

METHODS: A multidisciplinary workshop was conducted at Ghent University Hospital with 26 healthcare providers from five institutions, including hospitals and general practitioner networks. Participants brainstormed ideal discharge summary formats, generating 170 ideas categorized into themes such as structure, medical history, medication, and follow-up. These insights informed the development of a 110-item structured questionnaire, distributed to 33 participants. Responses were used to generate personalized and generic prompts, refined using the CO-STAR framework (Context, Objective, Style, Tone, Audience, Response).

RESULTS: Structure/form (24%) and follow-up (16%) were the most emphasized categories in the workshop. The questionnaire confirmed the importance of follow-up and medical history sections. Prompts were generated per participant and by provider type, incorporating frequently selected responses. The CO-STAR framework improved prompt clarity and alignment with clinical expectations. Communication emerged as a new category during the workshop and was universally valued in the questionnaire.

CONCLUSIONS: This study presents a novel, systematic approach to prompt engineering in clinical AI applications. By translating qualitative input into structured, individualized prompts, the framework enhances the usability and relevance of AI-generated discharge summaries. It offers a scalable model for integrating human-centered design into LLM deployment in healthcare, supporting more accurate, context-aware clinical documentation.
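The CO-STAR refinement step described in the abstract can be sketched as a simple prompt assembler that concatenates the six components into one instruction block. This is a minimal illustration only; the function name and all field values below are hypothetical and do not reproduce the study's actual questionnaire items or prompt wording.

```python
def build_costar_prompt(context, objective, style, tone, audience, response):
    """Assemble an LLM prompt from the six CO-STAR components
    (Context, Objective, Style, Tone, Audience, Response)."""
    sections = [
        ("Context", context),
        ("Objective", objective),
        ("Style", style),
        ("Tone", tone),
        ("Audience", audience),
        ("Response", response),
    ]
    # One labeled block per component, in the canonical CO-STAR order.
    return "\n".join(f"# {label}\n{value}" for label, value in sections)

# Hypothetical example values, for illustration only.
prompt = build_costar_prompt(
    context="Inpatient stay with clinical notes available in the record.",
    objective="Generate a discharge summary emphasizing follow-up care.",
    style="Structured, with labeled sections.",
    tone="Professional and concise.",
    audience="General practitioner receiving the patient.",
    response="Plain text with headings for medication and follow-up.",
)
print(prompt)
```

In this sketch, per-participant personalization would amount to filling the six fields from each provider's questionnaire responses before sending the assembled prompt to the LLM.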
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations