This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The ChatGPT Fact-Check: exploiting the limitations of generative AI to develop evidence-based reasoning skills in college science courses
Citations: 6
Authors: 3
Year: 2025
Abstract
Generative large language models (LLMs) such as ChatGPT can quickly produce informative essays on a wide range of topics. However, the information they generate cannot be fully trusted, as artificial intelligence (AI) can make factual mistakes. This poses challenges for using such tools in college classrooms. To address this, an adaptable assignment called the ChatGPT Fact-Check was developed to teach students in college science courses the benefits of using LLMs for topic exploration while emphasizing the importance of validating AI-generated claims against evidence. The assignment requires students to use ChatGPT to generate essays, evaluate the AI-generated sources, and assess the validity of AI-generated scientific claims against experimental evidence in primary sources. It reinforces responsible use of AI for exploration while maintaining evidence-based skepticism, and it meets learning objectives around efficiently leveraging the beneficial features of AI, distinguishing types of evidence, and evaluating claims based on evidence. Its adaptable design allows integration across diverse courses to teach students to use AI responsibly for learning while maintaining a critical stance.

NEW & NOTEWORTHY: Generative large language models (LLMs) such as ChatGPT often produce erroneous information unsupported by scientific evidence. This article outlines how these limitations can be leveraged to develop critical thinking and to teach students the importance of evaluating claims against experimental evidence. The activity also highlights the positive aspects of generative AI for efficiently exploring new topics of interest while maintaining skepticism.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations