OpenAlex · Updated hourly · Last updated: 27.03.2026, 12:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Under what influence: Measuring AI influence to fit user profiles in decision-making

2026 · 1 citation · International Journal of Human-Computer Studies · Open Access
Open full text at publisher

Citations: 1
Authors: 4
Year: 2026

Abstract

Artificial Intelligence (AI) has become a pivotal tool in augmenting human decision-making across various domains, yet its influence on user decisions often lacks comprehensive evaluation. While technical performance metrics such as accuracy and efficiency dominate AI design, integrating human-centered approaches that consider trust and reliance remains underexplored. This study addresses the knowledge gap in understanding how AI systems influence decision-making quality, calibrated to user profiles, including their expertise, skills, professional role, confidence, and reliance tendencies. We present a novel and comprehensive metric framework for evaluating AI influence, emphasizing behavioral patterns and measurable improvements in decision outcomes beyond simple alignment with AI recommendations. The framework is applied to four medical domain case studies (MRI, ECG, X-ray, and ENDO), with user groups spanning specialists, sub-specialists, and trainees. Results reveal that while human and AI systems achieve high agreement rates (up to 81%), AI influence on decision quality varies significantly. Notably, X-ray decision-making showed the highest influence index (0.27), while MRI decisions exhibited substantial self-anchoring bias (6.94), undermining the potential positive impact of AI. Influence metrics unveiled nuances missed by agreement scores, highlighting domain-specific biases and opportunities to optimize AI-human interaction. This research underscores the necessity for adapting the type of AI system and affordance to user characteristics and attitudes of reliance to foster calibrated trust and improve decision outcomes. Our findings inform the design of AI systems that better support diverse user needs and align with human decisions, driving progress toward human-centered AI integration in high-stakes domains.

Highlights

• Developed novel metrics to evaluate AI’s influence beyond user agreement with AI.
• Identified biases impacting AI influence, such as self-anchoring and automation bias.
• Applied framework to four medical studies with 330 clinicians and 15,000 decisions.
• Revealed up to 81% alignment but variances in appropriate reliance and influence.
• Highlighted need for adaptive AI systems to match user expertise.
• Demonstrated that influence metrics uncover dynamics missed by traditional reliance.


Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Human-Automation Interaction and Safety