OpenAlex · Updated hourly · Last updated: 04.04.2026, 15:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Guide of ethical use of LLM generative AI Systems in Higher Education

2025 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

Citations: 0 · Authors: 5 · Year: 2025

Abstract

This Guide provides practical, role-specific guidance for the responsible adoption of Large Language Models (LLMs) and Generative AI (GenAI) in higher education. It synthesizes current institutional practices and scholarship to help educators, students, and university leaders make informed, values-aligned decisions about when and how to use GenAI. The work draws on a thematic analysis of materials from 20 leading European universities and on six collaboratively developed case studies gathered within the ADMIT partnership, offering a balanced view of opportunities and risks in real academic settings. The document is anchored in a taxonomy of eight ethical dimensions: Educational Impact & Academic Integrity; Privacy & Data Governance; Societal, Individual & Environmental Wellbeing; Teacher/Student Agency & Oversight; Diversity, Non-discrimination & Fairness; Accountability; Transparency; and Technical Robustness & Safety. These dimensions are operationalized through thirty indicators that institutions can use for self-assessment and policy design. The Guide can be read alongside the AI-LD Activity Framework, which complements its ethical lens with concrete support for the design of AI-enabled learning activities.

For educators, the report highlights clear benefits (efficiency gains, richer materials, more responsive feedback, and new assessment designs) while underscoring duties around accuracy checking, disclosure, bias awareness, and safeguarding of learner data. It advises treating GenAI as assistive, not substitutive; maintaining human oversight in grading; and avoiding external uploads of student work without institutional approval.

For students, it frames GenAI as a tool to enhance, not replace, learning, communication, and coding fluency, while warning against plagiarism, over-reliance, and uncritical acceptance of outdated, biased, or fabricated outputs. It emphasizes disclosure of permitted use and careful handling of personal or confidential data.
Finally, at the institutional level, this handbook sets out the strategic benefits (process modernization, inclusive access, and evidence-informed decision-making) and the key risks, including policy fragmentation, privacy and IP exposure, environmental impact, and inequity from paywalled tools. It converts these into concrete governance measures across hiring, tutoring and support, assessment, and exam integrity, among them human review of AI decisions, clear disclosure to users, minimal and regulated data collection, equitable access, and opt-out options in high-stakes contexts.
