This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Authors’ reply: How ChatGPT performs in oral medicine: The case of oral potentially malignant disorders. Oral Diseases. Advance online publication. https://doi.org/10.1111/odi.14750
Citations: 0 · Authors: 5 · Year: 2023
Abstract
We would like to thank Silva Cunha et al. (2023) and Sarode and Sarode (2023) for their interest in our article on ChatGPT and oral potentially malignant disorders (OPMDs) (Diniz-Freitas et al., 2023), which aimed to enrich the debate on the possibilities and potential drawbacks of using ChatGPT in the oral medicine setting. ChatGPT can be a valuable tool in the differential diagnosis and clinical decision-making process, but the ultimate responsibility for patient care and outcomes falls on the healthcare practitioner. ChatGPT is incapable of performing a physical examination or assessing images, which are critical drawbacks in a clinical context such as the management of OPMDs, which is largely based on a set of risk factors such as age, sex, location, size, clinical appearance, and, above all, the degree of epithelial dysplasia (Wang et al., 2021). We therefore emphasized in our article that we should not underestimate the experience of healthcare practitioners acquired through years of rigorous training, research, and clinical practice (Diniz-Freitas et al., 2023). However, the concerns expressed by Silva Cunha et al. (2023) regarding the learning process, the critical assessment of information, decision making, and the risk of creating a generation of practitioners who are excessively dependent on AI are legitimate and deserve thorough debate among researchers and educators. Dentistry students of the “Google Generation” have grown up with ubiquitous access to online information and now prefer instant responses through search engines, videos, and social networks over classical information resources (Burns et al., 2020). ChatGPT is a new online tool whose limitations include the fact that it can become outdated and generate incorrect, unverified information (Diniz-Freitas et al., 2023).
However, lack of updating is also a disadvantage of medical books (the gold standard) (Tez & Yildiz, 2017), which often contain inaccuracies, as also occurs with scientific articles (Ioannidis, 2005; Jeffery et al., 2012). This is not the first time that educators have faced the challenges imposed by technological innovation. In recent decades, technology has been progressively incorporated into all aspects of dental education (Ali et al., 2023). As with other online resources, ChatGPT can naively generate incorrect and unverified information (Díaz-Rodríguez et al., 2023); human users are therefore responsible for ensuring that the content is accurate. The fact that, to date, ChatGPT is not required to indicate the source(s) of the information it provides represents a significant problem in terms of accountability. As a result, ChatGPT currently carries the risk of generating harmful information (Moskatel & Zhang, 2023), with deep implications for patient wellbeing (Oh et al., 2023). A comprehensive assessment of its responses should therefore be conducted, and it is imperative to evaluate their reliability and efficacy in various contexts, checking their consistency against consensus documents and guidelines. Paradoxically, far from being overshadowed in this new scenario, the role of the educator in medical disciplines becomes even more relevant: the educator should act as a supplier of information and a facilitator of learning, promoting students' development of critical evaluation skills and their implementation in evidence-based practice (Chuenjitwongsa et al., 2018). It is the educator's responsibility to implement the technology as an instrument for offering high-quality academic instruction, rather than feeling intimidated by its presence (Stein et al., 2014). Sarode and Sarode (2023) state that ChatGPT, as a common informational resource for clinicians and patients, could alter patient-physician trust.
This contradicts the opinion of other authors, who consider that one of the most promising applications of artificial intelligence in medicine is the development of conversational agents (chatbots) that provide information and support for patients to manage their health conditions (Souza et al., 2023). It has been shown that the information available online on OPMDs, even before the emergence of ChatGPT, is scarce, difficult to read, and unreliable (Alsoghier et al., 2018; Wiriyakijja et al., 2016). This lack of information should encourage clinicians to individually assess their patients' informational needs in order to support a well-informed decision-making process (Alsoghier et al., 2023). In short, we recommend prudence in using large language models such as ChatGPT in all dentistry specialties, including oral medicine. ChatGPT is a new resource with undeniable potential but obvious limitations, and it will ultimately be implemented in the classroom and in clinical practice. We should take it as an opportunity that will likely require educators to reconsider the teaching-learning process and clinicians to review the physician-patient relationship and the decision-making process.

Márcio Diniz-Freitas: Conceptualization; writing – original draft; writing – review and editing. Berta Rivas-Mundiña: Writing – review and editing. José Ramón García-Iglesias: Writing – review and editing. Eliane García-Mato: Writing – review and editing. Pedro Diz-Dios: Writing – review and editing; writing – original draft. Not applicable.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 cit.