This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Modeling Human-Algorithm Interaction to Improve Trust and Reliability of Intelligent Decision Support Systems in Data-Driven Organizations
Citations: 0
Authors: 3
Year: 2026
Abstract
This research explores the role of human-algorithm interaction mechanisms in enhancing trust, reliability, and user confidence in Decision Support Systems (DSS). Traditional DSS models often focus solely on algorithmic accuracy and performance, neglecting factors such as transparency and user engagement that are essential for building trust. By incorporating explainable AI (XAI) techniques such as SHAP and LIME, real-time feedback mechanisms, and user-friendly interfaces, the study develops structured interaction models that improve the interpretability of AI-driven decisions. The results show that transparent decision-making processes and interactive features significantly enhance user trust, making DSS more reliable and easier to adopt. Users interacting with systems that provide clear, understandable explanations of decisions, along with real-time updates on the system's confidence, reported higher levels of decision-making confidence, especially in high-stakes scenarios. These improvements lead to greater user engagement and adoption of the system in various domains, including healthcare and finance. The study also highlights the importance of balancing interpretability with efficiency in user interface design to ensure both trust and usability. The findings contribute to the design of more user-centric DSS that prioritize trust, interpretability, and cognitive factors, providing a framework for the successful integration of intelligent decision support systems in complex decision-making environments. Future research should focus on refining interaction models and exploring the broader applicability of these systems in different sectors.
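The abstract names LIME among the XAI techniques used to make DSS decisions interpretable. As an illustration only (not the paper's actual implementation), the core LIME idea can be sketched as a locally weighted linear surrogate fitted around a single instance; `black_box` below is a hypothetical model standing in for the DSS predictor:

```python
import numpy as np

def black_box(X):
    """Hypothetical nonlinear model to be explained (stand-in for a DSS)."""
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def lime_explain(x, predict_fn, n_samples=500, width=0.5, seed=0):
    """LIME-style sketch: fit a proximity-weighted linear model near x.

    Returns one coefficient per feature; sign and magnitude indicate
    the feature's local influence on the black-box prediction.
    """
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise to probe its neighborhood.
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = predict_fn(X)
    # Proximity kernel: perturbations closer to x get more weight.
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # drop the intercept; keep per-feature attributions

x0 = np.array([0.2, 1.0])
attributions = lime_explain(x0, black_box)
```

In this toy setup the surrogate recovers the local signs of the model's behavior (positive influence of the first feature, negative of the second near `x0`), which is the kind of per-decision explanation the study argues builds user trust.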
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,869 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,346 citations
"Why Should I Trust You?"
2016 · 14,643 citations
Generative adversarial networks
2020 · 13,279 citations