This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Preparing for ChatGPT: Comparing Student Attitudes on Generative AI in Contrasting Class Instruction
Citations: 1
Authors: 2
Year: 2024
Abstract
AI text generators have inspired confusion, concern, and curiosity among students and educators, particularly after the release of OpenAI's ChatGPT in November 2022. For educators, two essential questions have arisen: "How can we discourage students from using AI to replace their own critical thinking?" and "How can we support appropriate use that deepens critical thinking?" We hypothesize that students will be less likely to rely too heavily on Generative AI to complete their assignments if instructors teach them how to use it effectively and appropriately instead of broadly prohibiting its use. This paper presents the results of a survey on students' perceptions of and experience with Generative AI/ChatGPT. Identical surveys were administered to students in two different sections of the same junior-level writing course for engineering majors. In one section, students were given prior instruction in the focused, ethical use of ChatGPT with a special emphasis on Generative AI's professional impact. These students were then asked to practice prompt engineering using the CLEAR framework described by Lo (2023): Concise, Logical, Explicit, Adaptive, Reflective. In the other section, students were given no specialized instruction in Generative AI tools or prompt engineering but were told that any unauthorized use would be considered plagiarism. By comparing the responses of the two groups, we hope to develop a balanced instructional approach, acknowledging that these tools represent a permanent shift in academic and professional communication without losing sight of our fundamental responsibility as educators: to help students hone their critical thinking skills and develop a deep understanding of their discipline.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations