This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Engineering Students' Experiences With ChatGPT to Generate Code for Disciplinary Programming
Citations: 1
Authors: 4
Year: 2025
Abstract
Large Language Models (LLMs) are transforming several aspects of our lives, including text and code generation. Their potential as "copilots" in computer programming is significant, yet their effective use is not straightforward. Even experts may have to generate multiple prompts before getting the desired output, and the generated code may contain bugs that are difficult for novice programmers to identify and fix. Although some prompting methods have been shown to be effective, the primary approach remains trial and error. This study explores mechanical engineering students' experiences with ChatGPT to generate code for a Finite Element Analysis (FEA) course, aiming to provide insights into integrating LLMs into engineering education. The course included a scaffolded progression in which students developed an understanding of MATLAB programming and the implementation of FEA algorithms. Afterwards, the students engaged with ChatGPT to automatically generate similar code and reflected on their experiences of using this tool. We designed this activity guided by the productive failure framework: since LLMs do not necessarily produce correct code from a single prompt, students would need to use these failures to give feedback, potentially increasing their own understanding of MATLAB coding and FEA. The results suggest that while students find ChatGPT useful for efficient code generation, they struggle to: (1) understand a more sophisticated algorithm than what they had experienced in class; (2) find and fix bugs in the generated code; (3) learn disciplinary concepts while also trying to fix the code; and (4) identify effective prompting strategies to instruct ChatGPT on how to complete the task. While LLMs show promise in supporting coding tasks for both professionals and students, using them requires strong background knowledge. When integrated into disciplinary courses, LLMs do not replace the need for effective pedagogical strategies. Our approach involved implementing a use-modify-create sequence, culminating in a productive failure activity in which students conversing with the LLM encountered desirable difficulties. Our findings suggest that students faced challenges in obtaining correct, working code for FEA and felt as if they were teaching the model, which in some cases led to frustration. Thus, future research should explore additional forms of support and guidance to address these issues.
Similar Works
Determining Sample Size for Research Activities
1970 · 17,665 citations
Scale Development: Theory and Applications
1991 · 14,735 citations
Online Learning: A Panacea in the Time of COVID-19 Crisis
2020 · 4,917 citations
Systematic review of research on artificial intelligence applications in higher education – where are the educators?
2019 · 4,439 citations
Blended learning: Uncovering its transformative potential in higher education
2004 · 4,406 citations