OpenAlex · Updated hourly · Last updated: 21.03.2026, 04:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

In AI We Trust? Exploring the Role of Explainable GenAI and Expertise in Education

2025 · 0 citations · Human Factors: The Journal of the Human Factors and Ergonomics Society
Open full text at the publisher

Citations: 0
Authors: 3
Year: 2025

Abstract

Objective: We examine AI trust miscalibration, the discrepancy between an individual's trust in AI and its actual performance, among university students. We assess how the length of explanations and students' expertise shape the likelihood of alignment with AI recommendations.

Background: The relationship between explainability and users' trust in AI systems has received little attention in the literature, even though AI-assisted processes increasingly affect all professions and hierarchical levels. Because human-AI relationships are often formed during education, it is crucial to understand how individual and contextual factors influence students' assessment of AI outputs.

Method: We conducted in-class experiments with 248 students from multiple universities. Participants solved GMAT questions and then viewed an AI recommendation (sometimes correct, sometimes incorrect) with varying explanation depth, after which they could revise their initial answer. A student's final answer matching the AI recommendation operationalized our measure of "trust." We estimated logistic models with control variables, including mixed-effects specifications to account for repeated observations.

Results: Explanation complexity is associated with higher trust on average, but its relevance depends on who reads the explanation and on whether the AI is correct. Students who had initially answered correctly were less willing to defer, especially when the AI was incorrect; conversely, agreement and consistency effects significantly amplified trust. These behavioral patterns highlight the conditions under which AI-generated explanations foster critical engagement or, conversely, encourage uncritical acceptance.

Conclusion: Our results point to an "AI knows better" heuristic at work, especially among nonexperts, in which polished presentation is easily read as reliability, encouraging uncritical agreement with incorrect recommendations. In parallel, experts benefit more from deeper rationales when the AI is accurate, yet in many cases still under-rely on correct assistance. Overall, trust calibration is driven less by any single cue than by the alignment of student performance, AI reliability, and explanation design, with prior agreement acting as a powerful amplifier of subsequent alignment.

Application: Our findings imply that instructional approaches should promote independent reasoning before exposure to AI, deploy concise but diagnostically informative explanations, and include brief verification steps before AI recommendations are accepted, especially for nonexperts, who are more prone to harmful switches. Simple monitoring tools that track helpful versus harmful answer changes could support a more discerning and productive use of AI tools.
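The Method section describes logistic models of whether a student's final answer matches the AI recommendation. The sketch below is a minimal, self-contained illustration of that kind of analysis, not the authors' actual model or data: it fits a plain logistic regression (no random effects) by gradient descent on synthetic data, with assumed predictor names (explanation length, AI correctness, the student's prior correctness) standing in for the paper's real covariates.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by batch gradient descent.

    Returns (weights, intercept). Illustrative only: the paper's
    mixed-effects specifications additionally model per-student
    random effects, which are omitted here.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic data: "trust" (1 = final answer matches the AI) as a
# function of three binary predictors. Effect sizes are invented,
# chosen only so that prior correctness lowers deference while a
# longer explanation raises it, mirroring the abstract's pattern.
random.seed(0)
X, y = [], []
for _ in range(500):
    expl_long = float(random.random() < 0.5)      # 1 = long explanation
    ai_correct = float(random.random() < 0.5)     # 1 = AI answer correct
    prior_correct = float(random.random() < 0.5)  # 1 = student was right
    logit = -0.5 + 0.8 * expl_long + 0.6 * ai_correct - 1.2 * prior_correct
    X.append([expl_long, ai_correct, prior_correct])
    y.append(1.0 if random.random() < sigmoid(logit) else 0.0)

w, b = fit_logistic(X, y)
print("coefficients (expl_long, ai_correct, prior_correct):", w)
print("intercept:", b)
```

On this synthetic data the fitted coefficients recover the planted signs: positive for explanation length and AI correctness, negative for the student's prior correctness.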

Similar works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI