This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
ProCoT: Stimulating Critical Thinking and Writing of Students through Engagement with Large Language Models (LLMs)
Citations: 1
Authors: 8
Year: 2023
Abstract
We introduce a novel writing method called Probing Chain-of-Thought (ProCoT), which potentially prevents students from cheating using a Large Language Model (LLM), such as ChatGPT, while enhancing their active learning. LLMs have disrupted education and many other fields. For fear of students cheating, many have resorted to banning their use. These LLMs are also known for hallucinations. We conduct studies with ProCoT in two different courses with 65 students. The students in each course were asked to prompt an LLM of their choice with one question from a set of four and required to affirm or refute statements in the LLM output by using peer-reviewed references. The results show two things: (1) ProCoT stimulates creative/critical thinking and writing of students through engagement with LLMs when we compare the LLM-only output to ProCoT output and (2) ProCoT can prevent cheating because of clear limitations in existing LLMs, particularly ChatGPT, when we compare students' ProCoT output to LLM ProCoT output. We also discover that most students prefer to give answers in fewer words than LLMs, which are typically verbose. The average word counts for students in the first course, ChatGPT (v3.5), and Phind (v8) are 208, 391 and 383, respectively.
Similar Works
Determining Sample Size for Research Activities
1970 · 17,756 citations
Scale Development: Theory and Applications
1991 · 14,741 citations
Online Learning: A Panacea in the Time of COVID-19 Crisis
2020 · 4,936 citations
Systematic review of research on artificial intelligence applications in higher education – where are the educators?
2019 · 4,624 citations
Blended learning: Uncovering its transformative potential in higher education
2004 · 4,417 citations