
In Reference to <i>Evaluation of Oropharyngeal Cancer Information from Revolutionary Artificial Intelligence Chatbot</i>

2024 · 1 citation · The Laryngoscope · Open Access

Citations: 1 · Authors: 2 · Year: 2024

Abstract

We would like to share our thoughts on the publication “Evaluation of Oropharyngeal Cancer Information from Revolutionary Artificial Intelligence Chatbot”.1 The study evaluated the accuracy and readability of ChatGPT 3.5 as a source of information on human oral cancer. Questions about human oral cancer were posed to ChatGPT 3.5 in three domains: diagnosis, treatment options, and post-treatment care. After scoring by physician raters, ChatGPT 3.5 answered the post-treatment questions most accurately and comprehensively, coming closest to the physicians' responses, followed by the treatment-related questions; the diagnosis-related questions showed the largest gap from the physician response scores. Across the three domains, post-treatment questions scored significantly higher than diagnosis-related questions. On this basis, the authors concluded that ChatGPT 3.5 is of suboptimal educational value for human oral cancer information and has the potential to mislead patients. In addition to the limitations listed in the article, we believe the study has some flaws. First, the specific time at which ChatGPT 3.5 was queried about human oral cancer is not reported. Research on large language models (LLMs) such as ChatGPT is time-sensitive: querying ChatGPT 3.5 at different times, or repeatedly on the same subjects, can yield slightly varying responses.2 Reporting the precise period of questioning would allow a more accurate assessment of ChatGPT 3.5's capacity to handle information on human oral cancer; furthermore, findings derived from three iterations of questioning ChatGPT 3.5 would be more precise.
Repeated questioning may yield more precise results, whereas asking each question only once risks biased results. To better understand the benefits and drawbacks of large language models (LLMs) in medical applications, further research should take a more rigorous approach and place greater emphasis on large-scale data when assessing chatbots such as ChatGPT 3.5. Given that ChatGPT 3.5 is free of charge and has a broad user base, further exploration of it can concentrate on post-treatment matters and patient instruction in human oral cancer. Furthermore, the more sophisticated ChatGPT 4.0 can be employed for further investigation into the diagnosis and management of human oral cancer.

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Radiomics and Machine Learning in Medical Imaging