This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Beyond boundaries: exploring a generative artificial intelligence assignment in graduate, online science courses
Citations: 5
Authors: 4
Year: 2024
Abstract
Generative artificial intelligence (GAI) offers increased accessibility and personalized learning, though the potential for inaccuracies, biases, and unethical use is concerning. We present a newly developed research paper assignment that required students to utilize GAI. The assignment was implemented within three online, asynchronous graduate courses for medical laboratory sciences. Student learning was assessed using a rubric, which rated students' effective integration and evaluation of GAI-generated content against peer-reviewed research articles, thus demonstrating their critical thinking and synthesis skills, among other metrics. Overall rubric scores were high, suggesting that learning outcomes were met. After field testing, we administered a 16-item survey about GAI utilization, contribution to learning, and ethical concerns. Data (n = 32) were analyzed, and free-response answers were thematically coded. While 93.8% of respondents found the GAI-generated content to be "very good" or "excellent," 28.1% found inaccuracies, and 68.8% "strongly agreed" or "agreed" that GAI should be allowed to be used as a tool to complete academic assignments. Interestingly, however, only 28.1% "strongly agreed" or "agreed" that GAI may be used for assignments if not explicitly authorized by the instructor. Though GAI allowed for more efficient completion of the project and better understanding of the topic, students noted concerns about academic integrity and the lack of citations in GAI responses. The assignment can easily be modified for different learning preferences and course environments. Raising awareness among students and faculty about the ethical use and limitations of GAI is crucial in today's evolving pedagogical landscape.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations