This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
To Explain or Not To Explain: An Empirical Investigation of AI-based Recommendations on Social Media Platforms
16 citations · 3 authors · 2024
Abstract
The integration of artificial intelligence into social media recommendations holds significant promise for enhancing user experience. Frequently, however, suggestions fail to align with users’ preferences and result in unfavorable experiences. Furthermore, the lack of transparency in social media recommendation systems gives rise to concerns regarding their impartiality, comprehensibility, and interpretability. This study explores social media content recommendation from the perspective of end users. To facilitate our analysis, we conducted an exploratory investigation involving users of Facebook, a widely used social networking platform. We asked participants about the comprehensibility and explainability of suggestions for social media content. Our analysis shows that users mostly want explanations when encountering unfamiliar content and wish to be informed about their data privacy and security. Furthermore, users favor concise, non-technical, categorical representations of explanations, along with the ability to control the flow of information. We observed that explanations affect users’ perception of the social media platform’s transparency, trust, and understandability. In this work, we outline design implications related to explainability and present a synthesized framework of how various explanation attributes impact user experience. In addition, we propose a second synthesized framework for including end users in the design of an explainable interactive user interface.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14.198 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations