This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial Intelligence and Responsible Adoption in Engineering Education: Evidence, Concerns, and a Constructive Path Forward
0
Citations
2
Authors
2026
Year
Abstract
The rapid integration of generative artificial intelligence (AI) into educational practice has generated both enthusiasm and apprehension. For Computer Applications in Engineering Education (CAE), a journal founded on the premise that computational technologies can enhance learning effectiveness, the present moment represents not a disruption of mission, but an inflection point. Among the most frequently expressed concerns are academic integrity and, with it, the potential erosion of critical thinking skills. Has generative AI fundamentally increased cheating, or has it primarily transformed the mechanisms through which academic misconduct may occur? A balanced examination of the available evidence suggests a more nuanced picture than public discourse often conveys.

Recent survey data confirm that generative AI use among students is widespread. The Higher Education Policy Institute [1] reports that over 90% of surveyed UK students use generative AI tools for academic purposes. Similarly, the College Board [2] reports that more than 80% of US high school students use generative AI for school-related work. Even adult learners have reported using AI for academic work [3]. AI use is no longer peripheral; it is mainstream. Large-scale submission analytics further demonstrate measurable AI integration into student work: Turnitin [4] reports that approximately 17% of global submissions exhibit substantial AI-writing indicators. Yet adoption alone does not equate to misconduct. Emerging empirical research suggests that academic dishonesty rates may not have dramatically increased following the release of large language models. A recent study in Computers & Education found that self-reported cheating behaviors among secondary students remained statistically comparable before and after the introduction of ChatGPT, suggesting a transformation rather than an explosion of misconduct patterns (e.g., comparative analyses reported in 2024). Similarly, scholars writing in the Journal of Engineering Education argue that generative AI challenges assessment design more than it fundamentally alters student ethics [5, 6].

Educator concern nevertheless remains high. The 2025 AI Index Report from Stanford's Institute for Human-Centered AI identified academic integrity and misuse as primary concerns among teachers and administrators [7]. The central tension is therefore not only uncertainty about AI use, but also uncertainty about assessment resilience. On the other hand, recent studies indicate that students in higher education use AI tools but lack structured support and formal training [8]. Students want clearer institutional support, guidance, and preparation for responsible AI use and future careers [9]. In contrast, other studies have reported students' feelings of guilt, shame, and fear when using generative AI for academic work [10]. Thus, it is imperative that educators take action and deliver concrete guidance to students.

AI-detection systems have been rapidly deployed. However, vendors and standards organizations caution against treating automated outputs as definitive evidence. The National Institute of Standards and Technology [11] has emphasized broader reliability and risk-management challenges inherent in evolving AI systems. False positives, paraphrasing, hybrid human–AI writing, and model drift complicate enforcement decisions.
Peer-reviewed discussions in engineering education similarly caution that reliance on detection technologies may produce procedural fairness concerns and may inadvertently penalize multilingual or stylistically distinctive writers [6]. Detection tools may serve as preliminary screening mechanisms, but they cannot replace sound pedagogical design.

International policy guidance increasingly advocates for governance frameworks grounded in transparency and AI literacy rather than prohibition. UNESCO [12] recommends clear institutional policies, disclosure practices, and educator capacity building. Complementary assessment-design strategies include process-based evaluation (drafts, checkpoints, design notebooks); reflective memos documenting reasoning and iteration; oral defense components or short explanatory interviews; contextualized assignments tied to local data or laboratory work; explicit AI-use disclosure expectations; and integration of AI literacy as a learning outcome. Such strategies shift evaluation from determining whether AI was used to determining whether understanding has been demonstrated.

Engineering education occupies a uniquely advantageous position in this transition. For more than three decades, CAE has promoted simulation-driven learning, computational modeling, and digital laboratories. Generative AI may be understood as a continuation of this computational trajectory. The core pedagogical question is not whether students consult AI systems, but whether they use them without forgoing learning, and whether assessments effectively measure modeling judgment, parameter selection, validation reasoning, and design trade-offs. These competencies resist superficial outsourcing. As Magana et al. [13] argue in the Journal of Engineering Education, generative AI can be integrated productively into engineering research and learning workflows when guided by structured pedagogical frameworks. Engineering education, therefore, may serve as a proving ground for responsible AI integration rather than a casualty of its misuse.

Students can also be equipped with strategies that foster learning agency when using generative AI, so that they develop self-regulated learning in this context. That is, students develop agency when they feel confident using generative AI (dispositional agency), when they have access to generative AI tools and institutional support (positional agency), and when they have motivation, goals, and choice in their uses of generative AI (motivational agency) [14]. Once students develop such forms of learning agency, they can build the capacity to self-regulate their learning with these tools, planning, monitoring, and evaluating the consequences of using them for academic work without sacrificing their learning [15].

Rather than framing generative AI solely as a threat to academic integrity, CAE advocates for principled innovation grounded in evidence, transparency, and pedagogical rigor. The responsibility before engineering educators is not to retreat from technological change, but to shape it: develop transparent AI-use policies aligned explicitly with course learning objectives and professional ethics; redesign assessments to emphasize reasoning, modeling judgment, iteration, validation, and design trade-offs; integrate AI literacy and AI learning agency as technical and professional competencies within engineering curricula; conduct rigorous empirical studies evaluating AI-integrated assessment frameworks; and disseminate validated practices through peer-reviewed scholarship that distinguishes evidence from anecdote.

Generative AI is unlikely to recede from educational environments. The central question is therefore not whether AI will be present, but whether engineering education will lead in defining its responsible, pedagogical, and effective use. Since its founding in 1992, CAE has consistently advanced the thoughtful integration of computational tools, simulation environments, multimedia learning modules, virtual laboratories, and data-driven instructional strategies. Each technological wave, from desktop computing to web-based learning, from CAD systems to high-fidelity modeling, initially raised concerns about rigor, dependency, and integrity. In each case, engineering education responded not by lowering standards, but by refining them. Generative AI represents the next phase in this computational evolution, and CAE commits to upholding academic integrity through pedagogical strength rather than technological surveillance alone; promoting research that differentiates responsible, pedagogical AI integration from misuse; identifying and validating strategies for generative AI use that empower students to succeed in modern engineering workplaces without compromising their learning; encouraging assessment models that measure deep understanding rather than surface production; providing a scholarly forum where innovation is examined with methodological rigor and professional dignity; and leading international dialogue on AI in engineering education grounded in evidence, not rhetoric. In doing so, CAE does not position itself as reacting to the AI wave, but as continuing a long-standing mission: advancing digital technologies to enhance learning effectiveness and elevate engineering education globally.

The integrity of engineering education will not be preserved by resisting AI, but by embedding it within principled, research-based pedagogy. The opportunity before us is not merely to manage risk, but to define standards. CAE stands committed to leading this effort: thoughtfully, rigorously, and with the dignity befitting a journal that has served the field for over three decades. The challenge is real. The opportunity is greater. The responsibility is ours.

Generative AI tools were used during manuscript preparation to assist in identifying and synthesizing publicly available literature related to artificial intelligence in engineering education. The authors take full responsibility for the interpretation and conclusions presented. References are provided for all cited literature. This work was supported in part by the U.S. National Science Foundation under award numbers 2434429 and 2315683. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Science Foundation. The authors declare no conflicts of interest. The data that support the findings of this study are available from the corresponding author upon reasonable request.
Related Works
International Journal of Scientific and Research Publications
2022 · 2,691 citations
Student writing in higher education: An academic literacies approach
1998 · 2,511 citations
Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling
2012 · 2,315 citations
How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data
2009 · 1,924 citations
Chatting and cheating: Ensuring academic integrity in the era of ChatGPT
2023 · 1,840 citations