This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Front Matter
Citations: 3
Authors: 4
Year: 2025
Abstract
Conversational agents (CAs), such as medical interview assistants, are increasingly used in healthcare settings due to their potential for intuitive user interaction. Ensuring the inclusivity of these systems is critical to providing equitable and effective digital health support. However, the underlying technology, models, and data can foster inequalities and exclude certain individuals. This paper explores key principles of inclusivity in patient-oriented language processing (POLP) for healthcare CAs to improve accessibility, cultural sensitivity, and fairness in patient interactions. We outline how considering the six facets of inclusive Artificial Intelligence (AI) will shape POLP within healthcare CAs. Key considerations include leveraging diverse datasets, incorporating gender-neutral and inclusive language, supporting varying levels of health literacy, and ensuring culturally relevant communication. To address these issues, future research in POLP should focus on optimizing conversation structure, enhancing the adaptability of CAs' language and content, integrating cultural awareness, improving explainability, managing cognitive load, and addressing bias and fairness concerns.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations