This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Making Explanations Make Sense: XAI for SMiShing Detection
Citations: 0
Authors: 1
Year: 2026
Abstract
Explainable Artificial Intelligence (XAI) is a key component of effective human-AI collaboration, particularly in high-stakes domains such as cybersecurity. While AI tools hold promise for mitigating threats such as SMS-based phishing (SMiShing), their real-world effectiveness may hinge not just on detection accuracy, but on whether users can make sense of the system's outputs. As SMiShing attacks grow in both frequency and sophistication, so does the urgency of designing human-centered AI systems that support user decision-making under uncertainty. This study examined how four distinct AI explanation types - Normative (rule-based), Attributive (feature-based), Exemplar (case-based), and Recommendation-Only - influence user performance, confidence, and mental workload in a simulated SMiShing detection task, compared against a no-AI baseline. Results showed that all AI-supported conditions improved classification accuracy, with minimal differences across explanation types. Confidence was slightly higher for Exemplar explanations, while subjective mental workload, perceived usability, and willingness to adopt the system did not vary across conditions. These results indicate that AI feedback can enhance decision-making without increasing workload or degrading user experience, and that the presence of an explanation may matter more than its style. The findings inform the design of more effective XAI systems that optimize user decision-making and minimize mental effort when users encounter cybersecurity threats.
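To make the four explanation conditions concrete, the sketch below shows how each style could be rendered for a message an AI detector has flagged. This is a minimal, purely illustrative Python example; the data structure, rule text, feature names, and sample messages are all assumptions made for illustration and do not reflect the authors' actual stimuli or implementation.

from dataclasses import dataclass

# Purely illustrative: the names, rule text, and messages below are
# assumptions, not materials from the study.

@dataclass
class Detection:
    message: str        # the SMS being evaluated
    label: str          # AI verdict, e.g. "smishing" or "legitimate"
    top_features: list  # used by the Attributive (feature-based) condition
    similar_case: str   # used by the Exemplar (case-based) condition

def explain(d: Detection, style: str) -> str:
    """Render the user-facing output for one experimental condition."""
    verdict = f"AI verdict: {d.label.upper()}."
    if style == "recommendation_only":
        return verdict
    if style == "normative":    # rule-based
        return verdict + " Rule: a link plus an urgent payment request is treated as smishing."
    if style == "attributive":  # feature-based
        return verdict + " Influential features: " + ", ".join(d.top_features) + "."
    if style == "exemplar":     # case-based
        return verdict + f' Similar known case: "{d.similar_case}"'
    raise ValueError(f"unknown explanation style: {style}")

d = Detection(
    message="Your package is on hold. Pay the redelivery fee now: bit.ly/xyz",
    label="smishing",
    top_features=["shortened URL", "payment request", "urgency cue"],
    similar_case="USPS: your parcel needs a $1.99 redelivery fee ...",
)
for style in ("recommendation_only", "normative", "attributive", "exemplar"):
    print(f"[{style}] {explain(d, style)}")

Running the loop prints the same verdict four times, each wrapped in a different explanation style, loosely mirroring a design in which the recommendation is held constant while only the explanation varies across conditions.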
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,366 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,255 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,122 citations