OpenAlex · Updated hourly · Last updated: 15.04.2026, 19:40

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Prompt engineering as cognitive scaffolding for ethical and explanatory quality in AI-mediated financial learning

2026 · 0 citations · Discover Education · Open Access
Open full text at the publisher

0 Citations · 3 Authors · Year: 2026

Abstract

Large language models (LLMs) are increasingly used in personal finance education, yet the role of instructional prompt design in shaping explanatory quality and ethical responsiveness remains underexplored. This matters because trustworthy financial guidance and instruction depend on clarity, accuracy, and appropriately cautionary framing. We conducted a Design-Based Research (DBR) Phase 1 artefact-validation study using content analysis of 120 GPT-4-turbo (May 2025) generations. Ninety diagnostic prompts varied linguistic framing, cognitive level, and ethical intent; thirty Phase 1b prompts instantiated a derived taxonomy. Two expert reviewers scored outputs (0–3) for financial accuracy, conceptual clarity, and ethical tone (inter-rater reliability: $$\kappa =0.82$$). Prompt form was a pedagogical determinant of content quality. Procedural prompts outperformed interrogatives on clarity ($$U=418$$, $$p<.001$$, $$r=.42$$) and accuracy ($$U=457$$, $$p<.01$$, $$r=.31$$). Ethical anchoring raised ethical tone (median 2.78 vs. 2.31; Cliff's $$\delta =.46$$, $$p<.001$$) with a modest accuracy trade-off ($$\delta =-.18$$, $$p=.04$$). Taxonomy-guided prompting increased medians for clarity ($$+0.46$$ to 2.92), accuracy ($$+0.41$$ to 2.81), and ethical tone ($$+0.37$$ to 2.71), and reduced low-quality outliers from $$18\%$$ to $$4\%$$ ($$\chi ^2=7.91$$, $$p=.005$$), evidencing higher instructional reliability at the content level. The prompt taxonomy, validated here at the content level, operationalises three design principles: Guided Explanation, Contextualised Inquiry, and Comparative Reasoning. These link prompt forms to instructional functions via structural scaffolding and situated relevance. Treating prompt engineering as instructional design can systematically improve AI-mediated educational content in high-stakes finance. DBR Phase 2 will test impacts on learner outcomes (decision quality, self-regulation, calibrated trust) and robustness across models and retrieval-augmented settings.
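The abstract reports inter-rater agreement as Cohen's κ and group differences on ordinal 0–3 scores as Cliff's δ. As a minimal illustration of how these two statistics are defined (not the authors' analysis code, and using invented example ratings), both can be computed in plain Python:

```python
from collections import Counter

def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[k] / n) * (c2[k] / n)                   # chance agreement
             for k in set(c1) | set(c2))
    return (po - pe) / (1 - pe)

# Hypothetical 0-3 ratings, purely for illustration:
rater1 = [3, 2, 3, 1, 2, 3]
rater2 = [3, 2, 2, 1, 2, 3]
print(cohens_kappa(rater1, rater2))

anchored   = [3, 3, 2, 3, 2]   # hypothetical ethical-tone scores, anchored prompts
unanchored = [2, 2, 3, 1, 2]   # hypothetical scores, unanchored prompts
print(cliffs_delta(anchored, unanchored))
```

A positive δ indicates the first group's scores tend to exceed the second's, matching the direction of the reported $$\delta =.46$$ for ethical anchoring; κ near 1 indicates strong rater agreement, as with the reported $$\kappa =0.82$$.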


Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Decision-Making and Behavioral Economics