This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Co‐designing a Large Language Model Benchmarking Dataset for Primary Care with Nurses in Kenya
Citations: 0
Authors: 6
Year: 2025
Abstract
Large Language Models (LLMs) are increasingly applied in healthcare, yet their training and evaluation often lack grounding in frontline realities in low-resource settings. This work documents the participatory co-design, curation, and descriptive characterization of a nurse-generated dataset for LLM benchmarking in primary healthcare (PHC) in Kenya. Using human-centred design methods, we trained 145 nurses across three counties to generate real-world clinical scenarios and questions using an adapted SBAR (Situation, Background, Assessment, Recommendation) framework. Through workshops, audio recordings, and digital submissions, nurses contributed 7,606 scenarios capturing decision-making needs that span clinical management, referral, communication/counselling, and the diagnostic, equipment, and social-context constraints typical of PHC. By grounding content in nurses' everyday practice, this work contributes a localized benchmark for LLM training and evaluation and offers a replicable model for ethical, inclusive AI design responsive to care realities in resource-constrained environments. This article details the co-design process, data pipeline, and dataset descriptives; benchmarking methods and results using this dataset are reported separately.
Related works
Health professionals for a new century: transforming education to strengthen health systems in an interdependent world (2010, 5,678 citations)
Bulletin of the World Health Organization (1955, 3,658 citations)
Global Surgery 2030: evidence and solutions for achieving health, welfare, and economic development (2015, 3,561 citations)
District Laboratory Practice in Tropical Countries (2005, 2,655 citations)
An estimation of the global volume of surgery: a modelling strategy based on available data (2008, 2,503 citations)