OpenAlex · Updated hourly · Last updated: 26.04.2026, 08:45

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Tailoring Discharge Summaries to Caregivers' Needs: part 1 of the 'Framework & Implementation of AI Tools' (FRAIT) Project (Preprint)

2025 · 2 citations

Citations: 2
Authors: 4
Year: 2025

Abstract

BACKGROUND: Discharge summaries are critical for continuity of care but often lack clarity and personalization, making it difficult for healthcare providers to retrieve essential information. While large language models (LLMs) offer potential for automating summary generation, their effectiveness depends heavily on the quality and contextual relevance of the prompts used.

OBJECTIVE: The objective of this study was to develop and evaluate a human-centered, replicable framework for creating individualized prompts that guide LLMs in generating discharge summaries tailored to the specific needs of healthcare providers.

METHODS: A multidisciplinary workshop was conducted at Ghent University Hospital with 26 healthcare providers from five institutions, including hospitals and general practitioner networks. Participants brainstormed ideal discharge summary formats, generating 170 ideas categorized into themes such as structure, medical history, medication, and follow-up. These insights informed the development of a 110-item structured questionnaire, distributed to 33 participants. Responses were used to generate personalized and generic prompts, refined using the CO-STAR framework (Context, Objective, Style, Tone, Audience, Response).

RESULTS: Structure/form (24%) and follow-up (16%) were the most emphasized categories in the workshop. The questionnaire confirmed the importance of follow-up and medical history sections. Prompts were generated per participant and by provider type, incorporating frequently selected responses. The CO-STAR framework improved prompt clarity and alignment with clinical expectations. Communication emerged as a new category during the workshop and was universally valued in the questionnaire.

CONCLUSIONS: This study presents a novel, systematic approach to prompt engineering in clinical AI applications. By translating qualitative input into structured, individualized prompts, the framework enhances the usability and relevance of AI-generated discharge summaries. It offers a scalable model for integrating human-centered design into LLM deployment in healthcare, supporting more accurate, context-aware clinical documentation.
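The abstract describes refining prompts with the six CO-STAR components. As a minimal sketch (not the authors' implementation; the function name, field contents, and validation logic are illustrative assumptions), such a prompt could be assembled like this:

```python
# Sketch only: assembling an LLM prompt from the six CO-STAR components
# (Context, Objective, Style, Tone, Audience, Response). All field text
# below is hypothetical, not taken from the study's questionnaire.
CO_STAR_FIELDS = ["Context", "Objective", "Style", "Tone", "Audience", "Response"]

def build_co_star_prompt(components: dict) -> str:
    """Concatenate the six CO-STAR components into a single prompt string.

    Raises ValueError if any component is missing, so an incomplete
    questionnaire response cannot silently produce a partial prompt.
    """
    missing = [f for f in CO_STAR_FIELDS if f not in components]
    if missing:
        raise ValueError(f"Missing CO-STAR components: {missing}")
    return "\n\n".join(f"# {field}\n{components[field]}" for field in CO_STAR_FIELDS)

# Hypothetical example for a general-practitioner recipient:
prompt = build_co_star_prompt({
    "Context": "Patient record and clinical notes from a hospital admission.",
    "Objective": "Generate a discharge summary with medication and follow-up sections.",
    "Style": "Structured, with clearly labeled sections.",
    "Tone": "Professional and concise.",
    "Audience": "General practitioner continuing the patient's care.",
    "Response": "Plain text; sections in this order: history, medication, follow-up.",
})
```

Keeping the components as separate fields, rather than one free-text prompt, is what lets responses be generated per participant or aggregated by provider type, as the study describes.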

Topics

Artificial Intelligence in Healthcare and Education