This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Evaluation of Artificial Intelligence-generated Responses to Common Plastic Surgery Questions
4
Citations
2
Authors
2023
Year
Abstract
Sir: We would like to share our thoughts on the publication “Evaluation of Artificial Intelligence-generated Responses to Common Plastic Surgery Questions.”1 The study assessed the accuracy of Bing and ChatGPT in providing information on breast implant–associated illness, anaplastic large cell lymphoma, squamous cell carcinoma, and plastic surgery issues. The researchers posed queries and compared the AI systems’ responses with information from reliable sources such as the Food and Drug Administration (FDA) and American Society of Plastic Surgeons (ASPS) websites. According to the results, both Bing and ChatGPT offered accurate replies, although Bing had a lower overall accuracy rate than ChatGPT. Bing’s responses were shorter and less thorough, and referenced both verified and questionable sources, whereas ChatGPT’s did not.

Although the study sheds light on the accuracy of AI systems in providing information on certain medical concerns, several caveats should be noted. First, the study examined only breast implant–related disorders and plastic surgery; it says nothing about the AI systems’ accuracy in delivering information on other types of cancer or other medical disorders. Second, accuracy was assessed by comparison with material from the FDA and ASPS websites. Although these are credible sources, there is no guarantee that the material on these websites is always correct or up to date. Third, the study did not specify the complexity of the questions or the level of difficulty of the plastic surgery in-service examination questions; without this context, the true performance of the AI systems is difficult to determine. Sensitive information should not be created, changed, or accepted by AI when human review is an option.2 ChatGPT can provide a great deal of information about problems and solutions, but its outputs suggest that some of its underlying datasets may include false presumptions or notions. As a result, patients may receive false or misleading information.
Before deploying chatbots and AI in academic settings, we must consider the ethical ramifications. One of the most crucial requirements is that AI systems be built and managed by experts. To prevent flaws, biases, and potential hazards, artificial intelligence systems must be continuously developed, tested, and monitored.

DISCLOSURE

The authors have no financial interest to declare in relation to the content of this article.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,197 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations