This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Student-Generated User Story Quality: A Study on Practitioner and ChatGPT Evaluation
0
Citations
3
Authors
2025
Year
Abstract
Evaluating the quality of student-generated user stories is important in software engineering education, but only a limited number of industry practitioners are available to assist. Integrating generative AI can facilitate this process. The INVEST quality evaluation framework is widely recognized for assessing user story quality; however, prior research has not explored its use in conjunction with generative AI. This study investigated ChatGPT's ability to evaluate user stories using the INVEST framework, comparing two ChatGPT-based evaluation approaches with evaluations by experienced practitioners, focusing on student-generated user stories. Discrepancies between ChatGPT and practitioner evaluations were measured using Mean Absolute Deviation (MAD), Mean Squared Error (MSE), and Root Mean Squared Error (RMSE). Statistical significance was tested using the Mann-Whitney U test. The results indicate that ChatGPT's first approach yielded lower discrepancies than practitioner evaluations. Moreover, significance testing showed no statistically significant differences between the ChatGPT and practitioner results for two INVEST criteria, Independent and Estimable. These findings suggest that the first approach can assist in the evaluation process, although practitioners must ensure comprehensive and accurate evaluations. ChatGPT can provide preliminary evaluations in educational contexts, enabling students to receive formative feedback and allowing educators to streamline evaluation processes. Although practitioner validation is still required, their role may shift toward verifying AI-generated results, thus reducing the overall workload and accelerating quality evaluation.
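The discrepancy metrics named in the abstract (MAD, MSE, RMSE) can be illustrated with a minimal sketch. The scores below are hypothetical INVEST ratings on a 1-5 scale, not data from the study:

```python
import math

def discrepancy_metrics(chatgpt_scores, practitioner_scores):
    """Compute MAD, MSE, and RMSE between two paired lists of scores."""
    diffs = [c - p for c, p in zip(chatgpt_scores, practitioner_scores)]
    n = len(diffs)
    mad = sum(abs(d) for d in diffs) / n          # Mean Absolute Deviation
    mse = sum(d * d for d in diffs) / n           # Mean Squared Error
    rmse = math.sqrt(mse)                         # Root Mean Squared Error
    return mad, mse, rmse

# Hypothetical scores for six user stories (not from the paper)
gpt = [4, 3, 5, 2, 4, 3]
prac = [5, 3, 4, 2, 3, 3]
mad, mse, rmse = discrepancy_metrics(gpt, prac)
```

For the significance test, `scipy.stats.mannwhitneyu` provides the Mann-Whitney U test on two independent samples of scores.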
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 cit.