OpenAlex · Updated hourly · Last updated: 16.03.2026, 14:14

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Exploring the impact of artificial intelligence on psychological well-being and inclusivity in higher education

2026 · 0 citations · Journal of Information Communication and Ethics in Society
Open full text at publisher

Citations: 0 · Authors: 4 · Year: 2026

Abstract

Purpose
This study examines how institutional and individual capabilities shape ethical artificial intelligence (AI) readiness, psychological well-being (PWB) and inclusivity (INC) among students as higher education adopts AI tools. Drawing on a socio-technical systems framework, the authors investigate how organizational conditions and student perceptions jointly influence inclusive outcomes in AI-enabled learning environments.

Design/methodology/approach
Survey data from students and faculty across multiple higher education institutions were used to test a structural model with latent constructs: institutional support (IS), AI integration capability (AIC), digital literacy (DL), ethical concerns (EC), perceived educational value (PEV), psychological safety in AI use (PSAI), trust in AI systems (TAIS), academic flourishing (AF), PWB and INC. Partial least squares structural equation modeling was used to estimate relationships and explanatory power.

Findings
IS, AIC and DL significantly enhance students' and faculty members' perceptions of the educational value of AI, psychological safety and TAIS. These perceptual mediators, in turn, positively influence perceived inclusion, AF and PWB. The model explains substantial variance in key outcomes (R²: PEV = 0.62, TAIS = 0.67, PSAI = 0.59, AF = 0.61, PWB = 0.64, INC = 0.60).

Research limitations/implications
This study is based on a cross-sectional survey, which limits the ability to draw causal conclusions. The sample, while diverse, was confined to US-based higher education institutions, which may limit the generalizability of the findings to other global contexts. In addition, reliance on self-reported data introduces potential response bias. Future research should use longitudinal and cross-cultural designs to explore how perceptions of AI evolve over time and in varied settings. Despite these limitations, the study offers a replicable ethical framework and empirical model that can inform responsible AI adoption and evaluation practices in higher education.

Practical implications
The findings offer actionable guidance for higher education institutions implementing AI. Investing in DL and ethical AI integration enhances student trust, well-being and inclusion. Institutions should prioritize transparent, explainable systems that align with human values and provide user-centered experiences. EC must be addressed proactively through governance frameworks, data privacy protections and inclusive design practices. Trust in AI is not automatic; it must be cultivated through participatory implementation, clear communication and attention to student agency. Embedding ethics-by-design into AI deployment supports student flourishing and ensures equitable access to the benefits of educational technologies.

Social implications
This study highlights the broader social consequences of AI adoption in education, particularly regarding equity, inclusion and mental well-being. AI systems that lack transparency or fairness can exacerbate existing digital divides and disproportionately disadvantage marginalized students. Conversely, ethically aligned AI, designed with justice, autonomy and psychological safety in mind, can foster more inclusive and supportive learning environments. Institutions have a social responsibility to ensure that AI tools do not replicate systemic biases but instead promote dignity, accessibility and human flourishing. The findings advocate for participatory, justice-oriented approaches to educational technology governance that prioritize collective well-being over efficiency alone.

Originality/value
The study extends AI adoption research in higher education by integrating psychological safety, trust, ethics and inclusivity into a socio-technical model of AI readiness. It shows that value-aligned IS and DL that positions students as critical agents are as important as technical proficiency for the responsible use of AI. The findings provide actionable implications for institutional strategy, AI policy and curriculum design.
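To make the methodology concrete, the following is a minimal sketch of the composite-based logic behind PLS-SEM: indicators are combined into standardized construct scores, structural paths are estimated by least squares, and explained variance (R²) is reported for an endogenous construct. The data, construct names (IS, DL, PEV) and equal indicator weights are illustrative assumptions only, not the authors' model, dataset or estimation procedure.

```python
# Illustrative sketch of composite-based path modeling (the core idea of
# PLS-SEM), using synthetic data and equal outer weights. This simplifies
# real PLS-SEM, which estimates outer weights iteratively.
import numpy as np

rng = np.random.default_rng(0)
n = 400  # hypothetical sample size

# Synthetic standardized indicator blocks: two exogenous constructs
# (IS, DL) and one endogenous construct (PEV), three indicators each.
IS = rng.normal(size=(n, 3))
DL = rng.normal(size=(n, 3))
# Let PEV indicators be partly driven by IS and DL so paths are non-trivial.
signal = IS.mean(axis=1) + DL.mean(axis=1)
PEV = signal[:, None] + rng.normal(size=(n, 3))

def composite(block):
    """Equal-weight composite score, standardized."""
    score = block.mean(axis=1)
    return (score - score.mean()) / score.std()

is_score, dl_score, pev_score = map(composite, (IS, DL, PEV))

# Structural (inner) model: regress PEV's score on IS and DL scores.
X = np.column_stack([is_score, dl_score])
beta, *_ = np.linalg.lstsq(X, pev_score, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((pev_score - pred) ** 2) / np.sum(pev_score ** 2)

print(f"path coefficients (IS->PEV, DL->PEV): {beta.round(2)}")
print(f"R^2 for PEV: {r2:.2f}")
```

In the paper's full model, each endogenous construct (PEV, TAIS, PSAI, AF, PWB, INC) would have its own structural equation and R² estimated this way, with bootstrapping used to assess path significance.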


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · AI in Service Interactions