This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ChatGPT in Occupational Medicine: A Comparative Study with Human Experts
Citations: 3
Authors: 14
Year: 2023
Abstract
Objectives: The objective of this study is to evaluate ChatGPT's accuracy and reliability in answering complex medical questions related to occupational health, to explore the implications and limitations of AI in occupational medicine, to provide recommendations for future research in this area, and to inform decision-makers about AI's impact in healthcare.
Methods: A group of physicians was enlisted to create a dataset of questions and answers on Italian occupational medicine legislation. They were divided into two teams, each assigned to a different subject area. ChatGPT was used to generate answers for each question, both with and without legislative context. The physicians evaluated human- and AI-generated answers blindly, with each team reviewing the other's work.
Results: Occupational physicians perform better than ChatGPT in generating accurate answers on a 5-point Likert scale, while ChatGPT with access to legislative context is comparable to professional doctors in providing complete answers. Still, we found that users tend to prefer answers generated by humans, indicating that while ChatGPT is useful, users still value the opinions of occupational medicine professionals.
Conclusions: The study evaluated ChatGPT's effectiveness in occupational medicine and identified crucial factors for its responsible use. It emphasizes ongoing dialogue and reflection for AI development in healthcare. ChatGPT provides 24/7 assistance to occupational physicians, increasing efficiency and reducing costs, monitoring workers' health, and offering a personalized service. It has the potential to transform occupational medicine and create safer work environments.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations