This is an overview page with metadata for this scientific work. The full article is available from the publisher.
ABSTRACT NUMBER: ESOC2026YS258
VALUE PROPOSITIONS OF MICROSOFT COPILOT IN SUPPORTING STROKE PATIENTS: TWO PATIENT-CENTERED APPRAISALS
Citations: 0
Authors: 2
Year: 2026
Abstract
Background and aims
The role of artificial intelligence (AI) in patient-facing support remains largely under-researched. This abstract outlines two projects that explored the use of Microsoft Copilot as a tool to educate and support NHS stroke patients.

Methods
Study 1: Thematic analysis of qualitative patient feedback collected through in-person interviews with 10 patients. One question from each patient was entered into Copilot, and the patient's feedback on the response was recorded.
Study 2: Analysis of the information sources Copilot used when answering patient questions, together with a quantitative analysis of patient feedback on the Copilot responses.

Results
Both studies have been completed and analysed. Study 1: six themes emerged: usefulness, knowledge, clarity, trust, AI abilities, and detail. Most feedback indicated that patients found the responses useful, knowledgeable, easy to understand, and trustworthy, although a minority found that the responses lacked detail and sufficient depth. Study 2: demonstrated general patient acceptance of AI-supported health education, as the AI produced relevant and understandable responses. It also identified limitations in the accuracy of the sources the AI used to generate those responses. Collectively, the findings provide preliminary insights into the role of AI tools, particularly Copilot, as an adjunct to patient support and education in stroke care. They also highlight the need for further research to broaden its application to more complex clinical scenarios, assess reliability, and establish appropriate governance. No further data collection has taken place at this stage.

Conflict of interest
Rhea Kakani and Nathan Mistry: nothing to disclose.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations