This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
When AI Changes its Tone, Does Acceptance Follow?
Citations: 0
Authors: 2
Year: 2025
Abstract
The integration of artificial intelligence (AI) into decision-making processes raises numerous questions about acceptance, particularly in hospital environments marked by strong professional identities. This working paper presents an ongoing experimental study that explores how the conversational tone of an AI agent might influence the acceptance of its recommendations, depending on the psychological profile of the healthcare professional. Building on a 2x2x2 model structured around three dimensions of ego (personal value, perceived competence, and social role), the study aims to demonstrate that adapting the tone can significantly improve acceptance of algorithmic recommendations. The experimental protocol involves profession-specific scenarios in which participants assess a series of AI-generated messages, each introduced during the option evaluation stage of the decision-making process. By proposing a novel identity-based approach to AI design, this study seeks to contribute to both theory and practice. It is expected to open perspectives for the development of conversational agents capable of dynamically adjusting their tone to enhance acceptability and support professional autonomy in hospital settings.
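The 2x2x2 model described in the abstract crosses the three ego dimensions, each varied at two levels. As a minimal illustrative sketch (the "low"/"high" level labels are an assumption for illustration, not taken from the paper), the experimental cells of such a full factorial design can be enumerated as:

```python
from itertools import product

# Three ego dimensions from the abstract; two levels each (labels assumed).
FACTORS = {
    "personal_value": ("low", "high"),
    "perceived_competence": ("low", "high"),
    "social_role": ("low", "high"),
}

def design_cells(factors):
    """Enumerate every cell of a full factorial design."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

cells = design_cells(FACTORS)
print(len(cells))  # a 2x2x2 design yields 8 experimental cells
```

Each of the eight cells corresponds to one psychological profile to which the AI agent's conversational tone could be matched.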
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations