This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Late Breaking Abstract - SPIROmetry interpretation in primary care with or without Artificial Intelligence Decision support software (SPIRO-AID)
Citations: 0
Authors: 19
Year: 2024
Abstract
<bold>Introduction:</bold> Quality and interpretation accuracy of spirometry are variable in primary care. We aimed to evaluate whether an AI decision support software (ArtiQ.Spiro) improves the diagnostic prediction of primary care clinicians. <bold>Methods:</bold> A parallel, two-group, randomised controlled trial of primary care clinicians in the UK who refer for, or interpret, spirometry. Clinicians were randomised 1:1 to independently interpret fifty de-identified, real-world patient spirometry sessions through an online platform either with (AI+) or without (AI-) an AI decision support software report. The primary outcome was diagnostic prediction performance (the proportion of the fifty spirometry sessions where the clinician's preferred diagnosis matched the reference diagnosis, derived by consensus from independent review of primary and secondary care notes and investigations by three pulmonologists). A Welch t-test was used for analysis. A planned subgroup analysis of cases with a reference diagnosis of COPD (20/50) was performed. <bold>Results:</bold> 234 participants were recruited and randomised from June 2023 to April 2024; 133 completed the study (AI+ n=66, AI- n=67): 73% female, 42% general practitioners, 54% on the national spirometry register. Mean (SD) correct diagnostic prediction performance was significantly better for AI+ than AI- (58.7 [7.0]% versus 49.7 [16.6]%; mean [95% CI] treatment effect: 9.0 [4.5-13.3]%, p<0.01), and for reference COPD cases (mean [95% CI] treatment effect: 15.9 [9.0-22.7]%, p<0.0001). <bold>Conclusion:</bold> Addition of AI decision support software improved diagnostic prediction performance in primary care clinicians assessing real-world spirometry, especially for cases of COPD.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- NIHR Leicester Biomedical Research Centre (GB)
- Royal Brompton & Harefield NHS Foundation Trust (GB)
- Harefield Hospital (GB)
- KU Leuven (BE)
- King's College London (GB)
- Papworth Hospital NHS Foundation Trust (GB)
- George Institute for Global Health (GB)
- The George Institute for Global Health (AU)
- University of Southampton (GB)
- Cicely Saunders International (GB)
- Queen Mary University of London (GB)
- Lung Institute (US)
- Hillingdon Hospitals NHS Foundation Trust (GB)
- British Lung Foundation (GB)
- House of Representatives (NL)