This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Generative AI in higher education: A cross-sector analysis of ChatGPT's impact on STEM, social sciences, and healthcare
Citations: 2
Authors: 3
Year: 2025
Abstract
The integration of Generative Artificial Intelligence (GenAI) in academic learning has gained substantial traction across disciplines, necessitating a systematic analysis of its impact. This study explored ChatGPT's transformative role in higher education from 2022 onwards, synthesizing empirical findings across twelve distinct academic fields spanning STEM, social sciences, and healthcare. Relevant empirical case studies were identified through a systematic Scopus database search, applying discipline-specific keywords and filtering out surveys, literature reviews, and theoretical papers. Multi-stage screening identified 60 full-text articles, from which twelve high-quality studies were ultimately selected for rigorous cross-disciplinary analysis. The findings revealed pronounced disciplinary variations in ChatGPT adoption and impact. Quantitative analysis demonstrated that STEM disciplines report significantly higher accuracy concerns (mean = 1.57 on a 0–2 scale) than other fields, while healthcare disciplines showed the highest privacy concerns (mean = 2.0). A moderate positive correlation (r = 0.68) exists between academic integrity concerns and usage intensity, with computer science and social science reporting the highest levels on both metrics. Female representation, documented in 50% of studies, appears to influence adoption patterns. Sample sizes varied considerably (n = 12 to n = 430), with computer science (n = 430) and medical education (n = 265) providing robust empirical bases. Cross-disciplinary analysis revealed that ChatGPT enhances academic performance in structured problem-solving contexts, with health sciences reporting the highest positive impact scores (mean = 1.67), while potentially undermining critical thinking. Disciplines with text-based assessments face greater academic integrity challenges (r = 0.72 correlation).
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations