This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Prompt engineering as cognitive scaffolding for ethical and explanatory quality in AI-mediated financial learning
Citations: 0
Authors: 3
Year: 2026
Abstract
Large language models (LLMs) are increasingly used in personal finance education, yet the role of instructional prompt design in shaping explanatory quality and ethical responsiveness remains underexplored. This matters because trustworthy financial guidance and instruction depend on clarity, accuracy, and appropriately cautionary framing. We conducted a Design-Based Research (DBR) Phase 1 artefact-validation study using content analysis of 120 GPT-4-turbo (May 2025) generations. Ninety diagnostic prompts varied linguistic framing, cognitive level, and ethical intent; thirty Phase 1b prompts instantiated a derived taxonomy. Two expert reviewers scored outputs (0–3) for financial accuracy, conceptual clarity, and ethical tone (inter-rater reliability: $$\kappa =0.82$$). Prompt form was a pedagogical determinant of content quality. Procedural prompts outperformed interrogatives on clarity ($$U=418$$, $$p<.001$$, $$r=.42$$) and accuracy ($$U=457$$, $$p<.01$$, $$r=.31$$). Ethical anchoring raised ethical tone (median 2.78 vs. 2.31; Cliff’s $$\delta =.46$$, $$p<.001$$) with a modest accuracy trade-off ($$\delta =-.18$$, $$p=.04$$). Taxonomy-guided prompting increased medians for clarity ($$+0.46$$ to 2.92), accuracy ($$+0.41$$ to 2.81), and ethical tone ($$+0.37$$ to 2.71), and reduced low-quality outliers from $$18\%$$ to $$4\%$$ ($$\chi ^2=7.91$$, $$p=.005$$), evidencing higher instructional reliability at the content level. The prompt taxonomy, validated here at the content level, operationalises three design principles: Guided Explanation, Contextualised Inquiry, and Comparative Reasoning. These link prompt forms to instructional functions via structural scaffolding and situated relevance. Treating prompt engineering as instructional design can systematically improve AI-mediated educational content in high-stakes finance.
DBR Phase 2 will test impacts on learner outcomes (decision quality, self-regulation, calibrated trust) and robustness across models and retrieval-augmented settings.
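The abstract reports inter-rater agreement as Cohen's κ and effect sizes as Cliff's δ on 0–3 rubric scores. As a minimal, stdlib-only Python sketch of how these two statistics are computed (the score lists below are hypothetical illustrations, not the study's data):

```python
from itertools import product

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: (observed - chance) / (1 - chance) agreement."""
    n = len(r1)
    categories = set(r1) | set(r2)
    p_observed = sum(a == b for a, b in zip(r1, r2)) / n
    p_chance = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

def cliffs_delta(xs, ys):
    """Cliff's delta: P(X > Y) - P(X < Y) over all cross-group pairs, in [-1, 1]."""
    gt = sum(x > y for x, y in product(xs, ys))
    lt = sum(x < y for x, y in product(xs, ys))
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical 0-3 ethical-tone scores for two prompt conditions.
anchored = [3, 3, 2, 3, 2, 3, 3, 2]   # ethically anchored prompts
neutral  = [2, 2, 1, 3, 2, 1, 2, 2]   # neutral prompts

delta = cliffs_delta(anchored, neutral)  # positive: anchored tends to score higher
```

A positive δ indicates the anchored condition stochastically dominates the neutral one; the study's reported $$\delta =.46$$ would correspond to roughly 73% of cross-condition comparisons favouring the anchored prompts.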
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,456 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,332 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,779 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,533 citations