OpenAlex · Updated hourly · Last updated: 2026-03-20, 09:01

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating and Improving Prompt Quality in LLM-Based Assistants: A Synthesis of Criteria and Indicators

2025 · 0 citations · ScholarSpace (University of Hawaii at Manoa)
Open full text at publisher

0 citations · 4 authors · Year: 2025

Abstract

Generative AI (GenAI) assistants, particularly large language models (LLMs), are gaining increasing relevance across domains. The quality of outputs generated by these systems is highly contingent on the input prompts, giving rise to new professional roles such as prompt engineers. In this study, we systematically examine evaluation criteria and optimization methods that can improve prompt quality. Drawing on a systematic literature review, we identify key criteria, including clarity, accuracy, and precision, and initial measurement techniques. In addition, we synthesize common optimization methods such as iterative refinement and shot-based prompting. Our work contributes to the growing efforts to standardize the evaluation and improvement of prompts in interactions with LLM-based assistants, thereby fostering a more rigorous and coherent understanding of the prompt quality construct.

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · AI in Service Interactions