This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Patterns of ChatGPT Use and Attitudes Toward AI in Medical Education: Findings From a Cross-Sectional Survey
Citations: 0
Authors: 12
Year: 2025
Abstract
<bold>Background</bold> Large language models (LLMs) such as ChatGPT are increasingly used by medical students, yet empirical evidence on real-world adoption, perceived value, and institutional factors supporting responsible use remains limited. <bold>Objective</bold> To characterize awareness, frequency, and purposes of ChatGPT use among medical students; examine associations with comfort, confidence, time saved, and training stage; and identify student- and institution-level factors linked to use. <bold>Methods</bold> Cross-sectional, anonymous e-survey of medical students (N=612). Descriptive statistics summarized demographics, use cases, and attitudes. Bivariate tests (χ² with Cramér’s V; Spearman’s ρ with 95% CIs) assessed associations. Logistic regression (outcome: uses ChatGPT yes/no) provided univariable and multivariable adjusted odds ratios (aOR) controlling for age, gender, and years in university. <bold>Results</bold> Awareness of ChatGPT was near-universal (96.4%); 59.0% reported use at least several times per week. Common use cases were information gathering (64.4%) and clarifying complex concepts (58.0%); exam preparation (34.2%) and creating study aids (28.4%) were less frequent, with communication simulations (17.5%), academic writing (14.2%), and clinical documentation (12.9%) least used. AI-use frequency differed by gender (p=0.015, V=0.12) and by academic year (p=0.008, V=0.13), peaking after 3 years of medical education; it did not differ by prior years studied online. Integration of ChatGPT into routine study correlated with comfort (ρ=0.469, p<0.001), perceived confidence increase (ρ=0.437, p<0.001), and more time saved (ρ=0.226, p<0.001).
In multivariable models, higher motivation (OR=1.48 per point, 95% CI 1.25–1.75, p<0.001), awareness of institutional AI policies (OR=2.53, 1.41–4.53, p=0.002), and awareness of support/resources (OR=2.28, 1.28–4.09, p=0.005) were independently associated with being a ChatGPT user; disciplinary consequences, self-rated performance, and perceiving ethical issues were not. <bold>Conclusions</bold> Medical students commonly and pragmatically integrate ChatGPT as a study assistant, especially for information seeking and explanation, with greater comfort, confidence, and time efficiency among routine users. Institutional levers matter: clear policies and visible support are linked to adoption beyond individual motivation. Findings support enabling, guidance-oriented integration and targeted onboarding for earlier-year students.
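As an illustration of the effect-size measure reported in the Methods, Cramér's V is derived from the Pearson χ² statistic of a contingency table as √(χ² / (n · (min(rows, cols) − 1))). The sketch below uses made-up counts, not the study's data; the function names are our own.

```python
import math

def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows).

    Returns (chi2, n) where n is the total sample size.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, n

def cramers_v(table):
    """Cramér's V effect size: sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    chi2, n = chi_square(table)
    k = min(len(table), len(table[0]))
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical 2x2 table: rows = gender, cols = frequent AI user (yes/no).
table = [[10, 20],
         [20, 10]]
print(round(cramers_v(table), 3))  # → 0.333
```

A value around 0.1, as reported for gender and academic year in this study, is conventionally read as a small effect.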
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations