This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Students Collaboratively Prompting ChatGPT
Citations: 3
Authors: 2
Year: 2025
Abstract
This study investigated how undergraduate students collaborated when working with ChatGPT and what teamwork approaches they used, focusing on students’ preferences, conflict resolution, reliance on AI-generated content, and perceived learning outcomes. In a course on the Applications of Information Systems, 153 undergraduate students were organized into teams of three. Team members worked together to create a report and a presentation on a specific data mining technique, drawing on ChatGPT, internet resources, and class materials. The findings revealed no strong preference for a single collaborative mode, though Modes #2, #4, and #5 were marginally favored for their clearer structures, role clarity, or greater individual autonomy. Students understandably encountered initial disagreements (averaging 30.44%), which were eventually resolved, indicating constructive debate that can improve critical thinking. Data also showed that students moderately modified ChatGPT’s responses (50% on average) and based nearly half (44%) of their overall output on AI-generated content, suggesting a balanced yet varied level of reliance on AI. Notably, a statistically significant relationship emerged between students’ perceived learning and actual performance, implying that self-assessment can complement objective academic measures. Students also employed a diverse mix of communication tools, from synchronous (phone calls) to asynchronous (Instagram) and collaborative platforms (Google Drive), valuing their ease of use but facing scheduling, technical, and engagement issues. Overall, these results point to the need for flexible collaborative patterns, more supportive AI-use policies, and versatile communication methods so that educators can apply collaborative learning effectively while maintaining academic integrity.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations