This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Undergraduates perceive differences in helpfulness and thoroughness of responses of ChatGPT 3.0, Gemini 1.5, and Copilot about drug interactions
Citations: 0
Authors: 1
Year: 2025
Abstract
This study addressed a critical gap in fundamental knowledge of AI/client interactions by asking students to compare the accuracy, thoroughness, and helpfulness of chatbot responses about the pharmacology of important medications. Eighteen undergraduates enrolled in an introductory pharmacology course at a Midwestern public university used standardized prompts to elicit drug interaction information for five commonly prescribed medications: aspirin, semaglutide, losartan, Yescarta, and a student-selected anesthetic. The chatbots were ChatGPT 3.0, Copilot, and Gemini 1.5, and each student evaluated responses generated by two of the three platforms. While all chatbots were rated highly for accuracy, perceptions of helpfulness and thoroughness varied across platforms and prompts. ChatGPT was most consistently rated as thorough and helpful overall, though Gemini outperformed it on select prompts. Comparisons between Copilot and Gemini slightly favored Copilot, but not across all prompts. Taken together, student feedback indicates that the tone and delivery of information may influence perceptions of chatbot helpfulness and completeness; in effect, chatbots' bedside manner may sway users. Two-thirds of participants said they would recommend using AI chatbots to understand medications. These findings underscore the importance of developing patient-centered educational resources that guide effective and ethical use of AI tools in healthcare communication, particularly as AI becomes more consistently integrated into clinical and medical education settings.