This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Biased memory retrieval in the service of shared reality with an audience: The role of cognitive accessibility.
Citations: 4
Authors: 4
Year: 2024
Abstract
After communicators have tuned a message about a target person's behaviors to their audience's attitude, their recall of the target's behaviors is often evaluatively consistent with their audience's attitude. This audience-congruent recall bias has been explained as the result of the communicators' creation of a shared reality with the audience, which helps communicators to achieve epistemic needs for confident judgments and knowledge. Drawing on the "Relevance Of A Representation" (ROAR) model of cognitive accessibility from motivational truth relevance, we argue that shared reality increases the accessibility of information consistent (vs. inconsistent) with the audience's attitude. We tested this prediction with a novel reaction time task in three experiments employing the saying-is-believing paradigm. Faster reactions to audience-consistent (vs. audience-inconsistent) information were found for trait information but not for behavioral information. Thus, an audience-congruent accessibility bias emerged at the level at which impressions and judgments of other persons are typically organized. Consistent with a shared-reality account, the audience-consistent accessibility bias correlated with experienced shared reality with the audience about the target person and with epistemic trust in the audience. These findings support the view that the creation of shared reality with an audience triggers a basic cognitive mechanism that facilitates the retrieval of audience-congruent (vs. audience-incongruent) trait information about a target person. (PsycInfo Database Record (c) 2024 APA, all rights reserved).