This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Symptom Checkers versus Doctors: A Prospective, Head-to-Head Comparison for GERD vs. Non-GERD Cough: 2017 Presidential Poster Award
Citations: 2
Authors: 6
Year: 2017
Abstract
Introduction: As patients become increasingly health literate, it is common for them to use symptom checker apps prior to seeing a healthcare provider. Critiques of prior symptom checker studies have called for prospective, real-patient cases. We aimed not only to prospectively analyze the diagnostic accuracy of symptom checkers versus doctors, but also to evaluate a common complaint, cough, and the ability of symptom checkers to accurately delineate GERD vs. non-GERD cough. Methods: 116 consecutive adult patients presented to an internal medicine clinic with a chief complaint of "cough." The 3 most visited online symptom checkers were used (WebMD, iTriage, FreeMD). A questionnaire was designed to navigate each of the 3 symptom checker algorithms pertaining to cough. Forms were completed in the office prior to seeing the provider. One investigator independently completed the patient assessment at the initial visit and was asked to list his top diagnosis. All original symptom checker questionnaires were later de-identified, and a panel of physicians ranked their top 3 diagnoses. The panel was then given the original symptom checker forms and the same patient's clinician visit note, without the assessment or plan, and asked to list their top 3 diagnoses. Results: 116 patients enrolled: 66 females and 50 males. 26 of 116 patients reported GERD symptoms on the questionnaires. There were no differences in gender or age among those reporting GERD symptoms. The office physician diagnosed 5 patients (5 of 26; 19%) with GERD as the top diagnosis for cough. Subsequent analysis of the symptom checkers alone either failed to recognize GERD in the patients the office physician had diagnosed with GERD as the top diagnosis, or inappropriately diagnosed GERD in patients not so diagnosed by the physician (Fig. 1). Physicians using only the symptom checker data did not improve diagnostic performance for the top diagnosis.
Physicians given the symptom checker data plus the blinded visit note did improve diagnostic performance; however, this accuracy still lagged behind that of the initial in-person office physician. Conclusion: Symptom checkers, alone or in combination with visit notes, cannot reliably diagnose GERD in patients presenting with cough. There remains an invaluable component of the real-time office visit that adds to physician diagnostic skill and cannot be recovered retrospectively. We aim to extrapolate these findings across various subspecialties and to patients presenting to the ER.

Figure 1 legend: N=116 unique patients. The numbers in parentheses are the results consistent with the Clinical Diagnosis (CD) made by the physician who saw the patient at the initial office visit (the gold standard for comparison). For example, for DOC2A, 85 patients were not diagnosed as GERD, and all are consistent with the CD results (=0). DOC2A ranked four patients as "1"; three of them were also ranked "1" by CD. DOC2A ranked 10 patients as "2"; 2 of them were diagnosed as "1" by CD. DOC2A ranked 17 patients as "3"; none of them were diagnosed as "1" by CD. The clinic physician's top diagnoses for cough were: asthma, bronchitis, heart failure, chronic obstructive pulmonary disease, influenza, pneumonia, sinusitis, upper respiratory infection, and gastroesophageal reflux disease. (a) Patient symptom checker diagnosis versus initial clinical provider diagnosis. (b) Doctor panel review of patient symptom checker symptoms versus initial clinical provider diagnosis. (c) Doctor panel review of patient symptom checker symptoms plus initial clinical provider visit note (without assessment and plan) versus initial clinical provider diagnosis.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations