OpenAlex · Updated hourly · Last updated: 22.03.2026, 05:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Making Explanations Make Sense: XAI for SMiShing Detection

2026 · 0 citations · Old Dominion University · Open Access
Open full text at publisher

Citations: 0

Authors: 1

Year: 2026

Abstract

Explainable Artificial Intelligence (XAI) is a key component of effective human-AI collaboration, particularly in high-stakes domains such as cybersecurity. While AI tools hold promise for mitigating threats such as SMS-based phishing (SMiShing), their real-world effectiveness may hinge not just on detection accuracy but on whether users can make sense of the system's outputs. As SMiShing attacks grow in both frequency and sophistication, so does the urgency of designing human-centered AI systems that support user decision-making under uncertainty. This study examined how four distinct AI explanation types (Normative, rule-based; Attributive, feature-based; Exemplar, case-based; and Recommendation-Only) influence user performance, confidence, and mental workload in a simulated SMiShing detection task, compared against a No AI baseline. Results showed that all AI-supported conditions improved classification accuracy, with minimal differences across explanation types. Confidence was slightly higher for Exemplar explanations, while subjective mental workload, perceived usability, and willingness to adopt the system did not vary across conditions. These results indicate that AI feedback can enhance decision-making without increasing workload or degrading user experience, and that the presence of an explanation may matter more than its style. The findings inform the design of more effective XAI systems that optimize user decision-making and minimize mental effort when users encounter cybersecurity threats.


Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI