This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial Intelligence-Based Decision Support Systems and Managerial Performance
Citations: 0
Authors: 3
Year: 2024
Abstract
This quantitative, cross-sectional, case-based study addresses the persistent problem that organizations invest in cloud-deployed, enterprise AI-based decision support systems (AI-DSS) yet often lack clear, measurable evidence about which adoption and perception factors actually translate into better managerial performance; accordingly, the purpose was to test a Technology Acceptance Model (TAM)-aligned explanatory model linking managers’ perceptions and use of AI-DSS to decision-linked performance outcomes in an enterprise case context. The sample comprised 120 managers across core enterprise functions (Operations 30.0%, Finance 23.3%, Sales/Marketing 18.4%, HR/Admin 15.0%, IT/Analytics 13.3%) with mixed seniority (Supervisors 33.3%, Mid-level 48.3%, Senior 18.4%), and most had received AI-DSS training (61.7%). Key variables included Perceived Usefulness (PU), Perceived Ease of Use (PEOU), Trust in AI-DSS (TRUST), Information Quality (INFOQ), and AI-DSS Use/Adoption Maturity (USE) as predictors, with Managerial Performance (MP) as the outcome. The analysis plan applied descriptive statistics, scale reliability, Pearson correlations, and multiple regression, followed by robustness diagnostics (multicollinearity, residual independence, influence, and stability tests). Results showed generally favorable perceptions and outcomes (PU M=4.02, SD=0.62; PEOU M=3.69, SD=0.66; TRUST M=3.74, SD=0.73; INFOQ M=3.91, SD=0.60; USE M=3.78, SD=0.71; MP M=3.88, SD=0.58). Correlations with MP were positive and moderate to strong (PU r=0.62, INFOQ r=0.58, TRUST r=0.55, USE r=0.49, PEOU r=0.41, all reported as significant at p<.001). The regression model explained 53% of variance in managerial performance (R²=0.53, F=38.7, p<.001), with PU (β=0.29, p<.001), INFOQ (β=0.24, p=.002), TRUST (β=0.21, p=.004), and USE (β=0.17, p=.011) emerging as significant unique predictors, while PEOU was not significant in the full model (β=0.08, p=.143).
Consistent with these effects, higher adoption maturity aligned with higher performance (Low maturity: n=26, MP 3.52; Moderate: n=62, MP 3.86; High: n=32, MP 4.12). Managers reported the most concrete decision benefits in speed (M=3.95, SD=0.72) and accuracy (M=3.89, SD=0.69), with comparatively weaker gains in reduced rework (M=3.61, SD=0.78). Robustness checks supported model credibility (VIF 1.3–2.4, Durbin–Watson 1.95, Cook’s D max 0.21, and stable coefficient signs after excluding extreme USE cases). Implications are that enterprise leaders should prioritize improving AI-DSS usefulness, information quality, and trust-building mechanisms, while accelerating embedded, routine use through governance and workflow integration, because usability alone may be insufficient to drive net performance gains once value and credibility factors are accounted for.
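The analysis pipeline the abstract describes (multiple regression predicting MP from five standardized predictors, followed by R², VIF, and Durbin–Watson diagnostics) can be sketched as below. This is a minimal illustration on synthetic, randomly generated data; the variable names (PU, PEOU, TRUST, INFOQ, USE) and coefficient values are only placeholders echoing the abstract, not the study's actual dataset or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # sample size matching the study

# Synthetic predictors sharing a common latent factor so they are
# correlated, loosely standing in for PU, PEOU, TRUST, INFOQ, USE.
latent = rng.normal(size=n)
X = np.column_stack([latent + rng.normal(size=n) for _ in range(5)])

# Simulated outcome (MP); weights echo the reported betas for illustration.
weights = np.array([0.29, 0.08, 0.21, 0.24, 0.17])
y = X @ weights + rng.normal(scale=0.8, size=n)

def ols(X, y):
    """Fit OLS with an intercept; return coefficients, residuals, and R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, resid, r2

beta, resid, r2 = ols(X, y)

# Durbin-Watson statistic on the residuals (values near 2 indicate
# no first-order autocorrelation; the study reports 1.95).
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Variance inflation factor per predictor: regress each column on the
# others; VIF = 1 / (1 - R^2_j). The study reports a 1.3-2.4 range.
vifs = []
for j in range(X.shape[1]):
    others = np.delete(X, j, axis=1)
    _, _, r2_j = ols(others, X[:, j])
    vifs.append(1.0 / (1.0 - r2_j))
```

On this synthetic draw the diagnostics behave as expected for a well-specified model: R² falls in [0, 1], each VIF is at least 1, and the Durbin–Watson statistic lies in its (0, 4) range. Influence checks such as Cook's D would follow the same pattern of refitting with observations held out.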
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations