This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A chatbot for the soul: mental health care, privacy, and intimacy in AI-based conversational agents
Citations: 1
Authors: 3
Year: 2025
Abstract
Artificial intelligence-based conversational agents—chatbots—are increasingly integrated into telehealth platforms, employee wellness programs, and mobile applications to address structural gaps in mental health care. While these chatbots promise accessibility, they are often deployed without sufficient impact assessment or even basic user testing. This paper presents a case study using community red-teaming exercises to evaluate a chatbot designed for wellness and spirituality. Unlike traditional red-teaming, which is conducted by engineers to assess vulnerabilities, community red-teaming treats impacted users as experts, uncovering concerns related to privacy, ethics, and functionality. Our fieldwork, conducted with undergraduate beta testers (n = 28), revealed that participants were often more comfortable sharing private information with the chatbot than with a stranger. Prior experience with commercial AI systems, such as ChatGPT, contributed to this ease. However, participants also raised concerns about misinformation, inadequate guardrails for sensitive topics, data security, and dependency. Despite these risks, users remained open to the chatbot’s potential as a spiritual wellness guide. We further examine how algorithmic impact assessments (AIAs) both capture and overlook key aspects of the spiritual chatbot user experience. These chatbots offer hyper-personalized, AI-mediated divination and wellness interactions, blurring the boundaries between astrology, mental health support, and spiritual guidance. The chameleon-like nature of these technologies challenges conventional assessment frameworks, necessitating a nuanced approach that considers specific use cases, potential user bases, and indirect impacts on broader communities. We argue that AIAs for care and wellness chatbots must account for these complexities, ensuring ethical deployment and mitigating harm.
Related works
Amazon's Mechanical Turk
2011 · 10,016 citations
The Transtheoretical Model of Health Behavior Change
1997 · 7,645 citations
COVID-19 and mental health: A review of the existing literature
2020 · 3,699 citations
Cognitive Therapy and the Emotional Disorders
1977 · 2,931 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,782 citations