OpenAlex · Updated hourly · Last updated: 09.04.2026, 21:33

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

In Response to Evaluation of Oropharyngeal Cancer Information from Revolutionary Artificial Intelligence Chatbot

2024 · 2 citations · The Laryngoscope · Open Access

Citations: 2 · Authors: 2 · Year: 2024

Abstract

We read Yang and Jiang's letter with interest.1 Our study highlighted that ChatGPT version 3.5 answers patient inquiries regarding oropharyngeal cancer (OPC) with some, yet imperfect, accuracy, comprehensiveness, and similarity to physician responses. This impressive yet fallible capability to answer patient-centered questions has been confirmed by other otolaryngology studies.2,3 Yang and Jiang discuss limitations requiring further discussion. First, we acknowledge that, based on their cited Elyoseph et al. article, use of ChatGPT at different times may yield different answers.4 However, the change that Elyoseph et al. demonstrate is a change in the "emotional awareness" of ChatGPT's responses, not a change in its accuracy or ability to answer medical questions. Given that ChatGPT version 3.5 is not connected to the internet and has limited knowledge of the world after 2021, reflecting a lack of updates to its knowledge base, we propose it is unlikely that its average level of accuracy in answering OPC questions would significantly change at a later time of query.5 However, ChatGPT version 4 recently opened its knowledge base to the Internet in live time, so the authors' point would be valuable for future studies using this version.

The authors also point out that the duration of ChatGPT's answer-generation process may elucidate information about its capacity to handle OPC-related information. Although this would have been a nice addition, we are unaware of an official source that stratifies ChatGPT's informational capacity by discrete response-time durations, which would be needed to give this data point significance. Perhaps this duration could be useful when comparing OPC questions to nonmedical questions on the same WiFi network at the same time of day, to avoid potential variations in WiFi connectivity and in the number of live users.

Finally, Yang and Jiang mention that asking questions worded in more than one way could increase the precision of our results. We agree with this sentiment and urge future studies to keep it in mind. However, we still assert the validity of our results, as our study maintains strength in using official sample patient questions from National Comprehensive Cancer Network (NCCN) guidelines. The letter also proposes the future directions of concentrating on posttreatment matters and exploring ChatGPT version 4. We have great interest in seeing these topics studied further. We additionally wish to highlight again our study's proposed future direction of exploring an OPC-trained large language model, especially considering ChatGPT version 4's recent update allowing users to make their own "GPTs" using specific training criteria.6 Targeted training data could significantly enhance this technology for OPC patients, providing invaluable information outside physician availability.


Topics

Artificial Intelligence in Healthcare and Education · Tracheal and airway disorders · Radiomics and Machine Learning in Medical Imaging