This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Perception, performance, and detectability of conversational artificial intelligence across 32 university courses
Citations: 7
Authors: 39
Year: 2023
Abstract
The emergence of large language models has led to the development of powerful tools such as ChatGPT that can produce text indistinguishable from human-generated work. With the increasing accessibility of such technology, students across the globe may utilize it to help with their school work -- a possibility that has sparked discussions on the integrity of student evaluations in the age of artificial intelligence (AI). To date, it is unclear how such tools perform compared to students on university-level courses. Further, students' perspectives regarding the use of such tools, and educators' perspectives on treating their use as plagiarism, remain unknown. Here, we compare the performance of ChatGPT against students on 32 university-level courses. We also assess the degree to which its use can be detected by two classifiers designed specifically for this purpose. Additionally, we conduct a survey across five countries, as well as a more in-depth survey at the authors' institution, to discern students' and educators' perceptions of ChatGPT's use. We find that ChatGPT's performance is comparable, if not superior, to that of students in many courses. Moreover, current AI-text classifiers cannot reliably detect ChatGPT's use in school work, due to their propensity to classify human-written answers as AI-generated, as well as the ease with which AI-generated text can be edited to evade detection. Finally, we find an emerging consensus among students to use the tool, and among educators to treat this as plagiarism. Our findings offer insights that could guide policy discussions addressing the integration of AI into educational frameworks.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
- Hazem Ibrahim
- Fengyuan Liu
- Rohail Asim
- Balaraju Battu
- Sidahmed Benabderrahmane
- Bashar Alhafni
- Wifag Adnan
- Tuka Alhanai
- Bedoor AlShebli
- Riyadh Baghdadi
- Jocelyn J. Bélanger
- Elena Beretta
- Kemal Çelik
- Moumena Chaqfeh
- Mohammed F. Daqaq
- Zaynab El Bernoussi
- Daryl Fougnie
- Borja García de Soto
- Alberto Gandolfi
- András György
- Nizar Habash
- J. Andrew Harris
- Aaron Kaufman
- Lefteris M. Kirousis
- Korhan Koçak
- Kangsan Lee
- Seung-Ah Lee
- Samreen Malik
- Michail Maniatakos
- David Melcher
- Azzam Mourad
- Minsu Park
- Mahmoud Rasras
- Alicja Reuben
- Dania Zantout
- Nancy W. Gleason
- Kinga Makovi
- Talal Rahwan
- Yasir Zaki