This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluating the Influence of Chatbots and AI Assistants on Medical Communication and Patient Trust
Citations: 0
Authors: 6
Year: 2024
Abstract
The use of chatbots and AI assistants in healthcare is changing how patients communicate with medical services, offering a potential means to improve communication, increase patient engagement, and build trust in healthcare delivery. This study examines how these digital tools affect communication and trust between doctors and patients. The rapid spread of AI-powered systems in healthcare settings has prompted debate about how well they support genuine relationships between healthcare workers and patients, and how they can make healthcare more accessible and efficient. The primary goal of the study is to examine how patients and healthcare workers perceive the use of AI in hospital settings, focusing on patient trust in and satisfaction with the technology. A mixed-methods approach was used, with participants from a wide range of groups taking part in both quantitative surveys and qualitative interviews. Data were collected from people who used AI-based applications and assistants in healthcare settings, for example to check symptoms, schedule appointments, and receive follow-up messages. The study investigates how these tools shape patient expectations, satisfaction with communication, and trust in AI systems that provide medical advice. The results show that patients report largely positive experiences with AI assistants, particularly regarding ease of use, fast responses, and round-the-clock availability. Nevertheless, concerns emerged about artificial intelligence's inability to provide humane care and about the need for human oversight in medical decision-making. The study concludes that in certain cases artificial intelligence can strengthen trust and connection, but that it should be used cautiously and that patient care still depends heavily on human contact.
To realise the full potential of AI-driven systems in healthcare, future research should focus on making them more accurate, empathetic, and transparent.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations