This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
What might we learn about autobiographical narrative processing from Artificial Intelligence?
Citations: 0
Authors: 6
Year: 2026
Abstract
Human beings are innate storytellers. Large language model (LLM) AI systems like ChatGPT are not. But what ChatGPT produces, when instructed to generate or interpret a narrative, is powerful and can provide insight into human narrative processing, because it is trained to produce narratives from human sources. Our paper explores these ideas. In the first section we review key concepts from narrative identity theory and research, including a recent study from our multi-institution collaboration exploring similarities and differences in human and AI narrative processing. The current study extends this work by exploring people's perceptions of ChatGPT interpretations of human personal narratives. Across these studies we find that people are adept at differentiating between human-generated and ChatGPT-generated self-defining memory narratives, and this may be because ChatGPT relies on redemptive tone and structure, a culturally dominant master narrative, in ways that seem inauthentic and uncanny to human participants. The discussion focuses on key questions this work raises for research on narrative meaning-making, authenticity through narrative identity, and therapeutic uses of AI.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations