OpenAlex · Updated hourly · Last updated: 11.03.2026, 06:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluation of Artificial Intelligence-generated Responses to Common Plastic Surgery Questions

2023 · 4 citations · Plastic and Reconstructive Surgery – Global Open · Open Access
Open full text at the publisher

4 citations · 2 authors · Year: 2023

Abstract

Sir: We would like to share our thoughts on the publication “Evaluation of Artificial Intelligence-generated Responses to Common Plastic Surgery Questions.”1 The study assessed the accuracy of Bing and ChatGPT in providing information on breast implant-associated illness, anaplastic large cell lymphoma, squamous cell carcinoma, and other plastic surgery topics. The researchers posed questions and compared the AI systems’ responses against data from reliable sources such as the Food and Drug Administration (FDA) and American Society of Plastic Surgeons (ASPS) websites. According to the results, both Bing and ChatGPT offered accurate replies, although Bing had a lower overall accuracy rate than ChatGPT. Bing’s responses were shorter and less thorough, and referenced both verified and questionable sources, whereas ChatGPT did not.

Although the study sheds light on the accuracy of AI systems in providing information on certain medical concerns, several caveats should be noted. The study examined only breast implant-related disorders and plastic surgery; it says nothing about the AI systems’ accuracy in delivering information on other cancers or medical conditions. Accuracy was assessed by comparison with data from the FDA and ASPS websites; although these are credible sources, there is no guarantee that the material on them is always correct or up to date. The study also did not specify the complexity of the questions or the difficulty level of the plastic surgery in-service examination questions, and this lack of context makes it difficult to determine the true performance of the AI systems.

Sensitive information should not be created, changed, or accepted by AI when human review is an option.2 A great deal about problems and their solutions can be found through ChatGPT, but its outputs suggest that some of its underlying datasets may include false assumptions or notions. As a result, patients may receive false or misleading information.
Before deploying chatbots and AI in academic settings, we must consider the ethical ramifications. One of the most crucial requirements is that AI systems be built and managed by experts. To prevent flaws, biases, and potential hazards, artificial intelligence systems must be continuously developed, tested, and monitored.

DISCLOSURE

The authors have no financial interest to declare in relation to the content of this article.

Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Digital Imaging in Medicine