This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Why Does Explainability Matter in News Analytic Systems? Proposing Explainable Analytic Journalism
Citations: 38
Authors: 1
Year: 2021
Abstract
As the use of algorithms has emerged in journalism, analytic/algorithmic journalism (AJ) has developed rapidly in major news organizations. Despite this surging trend, little is known about the role and effects of explainability on how people perceive and make sense of trust in an algorithm-driven AI system. While AJ has greatly benefited from increasingly sophisticated algorithmic technologies, it suffers from a lack of transparency and understandability for readers. We identify explainability as a heuristic cue of an algorithm and conceptualize it in relation to trust by testing how it affects users' emotional responses to AJ. Our experiments show that adding interpretable explanations enhances trust in the context of AJ, and that readers' trust hinges on the perceived normative values used to assess algorithmic quality. Explanations of why certain news articles are recommended give users emotional assurance and affirmation. Mediation analyses show that explanatory cues play a mediating role between trust and performance expectancy. The results have implications for the inclusion of explanatory cues in AJ, which help to increase credibility and help users assess AJ's value.
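The abstract reports mediation analyses in which explanatory cues mediate between trust and performance expectancy. As an illustration only (not the authors' actual analysis or data), the basic logic of a simple linear mediation model can be sketched on synthetic data: the indirect effect is the product of the path from predictor to mediator (a) and the path from mediator to outcome controlling for the predictor (b), and the total effect decomposes exactly into direct plus indirect effects under OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic illustration: explanation cue (x), trust (m), performance expectancy (y).
# Effect sizes below are arbitrary choices, not values from the paper.
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)            # mediator driven by x
y = 0.5 * m + 0.1 * x + rng.normal(scale=0.5, size=n)  # outcome driven by both

def ols_coefs(target, *predictors):
    """Least-squares coefficients; index 0 is the intercept."""
    A = np.column_stack([np.ones(len(target)), *predictors])
    coefs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coefs

a = ols_coefs(m, x)[1]            # path a: x -> m
full = ols_coefs(y, x, m)
direct = full[1]                  # direct effect of x on y, controlling for m
b = full[2]                       # path b: m -> y, controlling for x
c = ols_coefs(y, x)[1]            # total effect of x on y
indirect = a * b                  # mediated (indirect) effect

print(f"total={c:.3f}  direct={direct:.3f}  indirect={indirect:.3f}")
```

In this linear setting the decomposition `total = direct + indirect` holds exactly, which is a useful sanity check when implementing the analysis.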
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations