OpenAlex · Updated hourly · Last updated: 18 Mar 2026, 22:10

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Promises and Pitfalls of LLMs as Feedback Providers: A Study of Prompt Engineering and the Quality of AI-Driven Feedback

2023 · 41 citations · Open Access

Citations: 41 · Authors: 2 · Year: 2023

Abstract

Artificial intelligence (AI) in higher education (HE) is reshaping teaching and learning, and feedback provided by large language models (LLMs) seems to have an impact on student learning. However, few empirical studies have compared the quality of LLM feedback with the feedback quality of real persons. Therefore, this study addresses the following questions: What prompts are needed to ensure high-quality LLM feedback in HE? How does feedback from novices, experts, and LLMs differ in terms of quality and content accuracy? We developed a learning goal with three errors and a theory-based manual to evaluate prompt quality. Specifically, three prompts of varying quality were created and used to generate feedback from ChatGPT-4. We provided the highest-quality prompt to novices and experts. Our results showed that only the best prompt produced consistently high-quality feedback. Additionally, LLM and expert feedback were significantly better than novice feedback, with LLM feedback being both faster and better than expert feedback in the categories of explanation, questions, and specificity. This suggests that LLM feedback can be a high-quality and efficient alternative to expert feedback. However, we postulate that prompt quality is crucial, highlighting the need for prompting guidelines and human expertise.

Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Text Readability and Simplification