OpenAlex · Updated hourly · Last updated: 24 Mar 2026, 12:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Abstract DP226: Artificial Intelligence Can Revolutionize Stroke Neurology Treatment Decisions And Stroke Medical Education With Near Perfect Accuracy.

2026 · 0 citations · Stroke

Citations: 0 · Authors: 10 · Year: 2026

Abstract

Intro: In just seconds, AI can search, prioritize, and synthesize massive amounts of information from the highest-impact journals and from detailed American Stroke Association (ASA) guidelines many physicians do not even know exist. AI could greatly improve the complex, often high-risk decisions stroke neurologists and emergency physicians make in time-critical stroke codes and elsewhere, but AI accuracy and safety remain unproven. We evaluated how accurate, fast, and safe AI answers are to complex stroke neurology questions based on peer-reviewed literature and ASA guidelines.

Methods: 729 complex stroke treatment or teaching questions were extrapolated from 12 current (published in the last 5 years) ASA/AHA scientific statements and 50 current JAMA/NEJM/BMJ/Lancet major stroke studies. These questions were posed to Google AI, ChatGPT-4, ChatGPT-5, and Open Evidence (OE) from February 2025 through August 2025, yielding 2,916 AI answers, which were reviewed for accuracy multiple times by a board-certified vascular neurologist, a neurosurgeon, a board-certified emergency physician, and a board-certified neurovascular APRN. Inaccurate responses to treatment questions that contradicted the literature were assigned a potential-harmfulness score on the University of California, San Francisco's adaptation of the Agency for Healthcare Research and Quality scale (UCSF/AHRQ), ranging from 0 (no harm) to 7 (death). The percentage of answers scored > 4 (permanent harm) was calculated for each AI model.

Results: OE was significantly more accurate than all other AI models (95.8%), followed by Google (90.7%), ChatGPT-5 (88.9%), and ChatGPT-4 (75.3%). Google was fastest to answer (2.7 s), followed by OE (10.8 s), ChatGPT-4 (12.7 s), and ChatGPT-5 (17 s). Incorrect ChatGPT-4 treatment answers carried the highest risk of permanent patient harm or death (73.1%), followed by Google (67.4%), ChatGPT-5 (64.4%), and OE (57.1%).
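The harm metric described in Methods reduces to a threshold count over the 0–7 UCSF/AHRQ scale. A minimal sketch of that calculation, using hypothetical scores (not the study's data) and an illustrative function name; the strict "> 4" threshold follows the abstract's wording:

```python
def percent_high_harm(scores, threshold=4):
    """Percent of harm scores (0 = no harm .. 7 = death) strictly above `threshold`."""
    if not scores:
        return 0.0
    return 100.0 * sum(s > threshold for s in scores) / len(scores)

# Illustrative harm scores for one model's inaccurate treatment answers
# (invented for demonstration, not the study's data):
example_scores = [2, 5, 6, 3, 7, 5, 1, 6, 5, 4]
print(percent_high_harm(example_scores))  # 60.0  (6 of 10 scores exceed 4)
```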
Since no single AI achieved near-perfect 99% accuracy, we reached that goal with a "simultaneous second opinion" search of the two best AIs: OE + Google achieved 99% accuracy in just 10.3 seconds, with possible permanent harm across its 3 incorrect treatment-question responses.

Conclusions: OE was significantly more accurate, faster, and safer than any version of ChatGPT. OE + Google can be queried simultaneously with no added delay and near-perfect accuracy (99% concordance with the stroke literature and guidelines). AI could quickly become an invaluable source of fast, accurate, and safe information that revolutionizes stroke care.
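The "simultaneous second opinion" result amounts to counting a question as correct when at least one of the two parallel-queried AIs answers it correctly. A minimal sketch with hypothetical per-question correctness flags (function name and data are illustrative, not the study's records):

```python
def combined_accuracy(correct_a, correct_b):
    """Percent of questions where model A or model B (or both) answered correctly."""
    assert len(correct_a) == len(correct_b), "one flag per question for each model"
    hits = sum(a or b for a, b in zip(correct_a, correct_b))
    return 100.0 * hits / len(correct_a)

# Toy example (not the study's data): model B rescues one of model A's two misses,
# so the pair outperforms either model alone.
a = [True, True, False, True, False]
b = [True, False, True, True, False]
print(combined_accuracy(a, b))  # 80.0
```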
