This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Rapid Integration of LLMs in Healthcare Raises Ethical Concerns: An Investigation into Deceptive Patterns in Social Robots
6 citations · 2 authors · 2025
Abstract
Conversational agents are increasingly used in healthcare, with Large Language Models (LLMs) significantly enhancing their capabilities. When integrated into social robots, LLMs offer the potential for more natural interactions. However, while LLMs promise numerous benefits, they also raise critical ethical concerns, particularly regarding hallucinations and deceptive patterns. In this case study, we observed a critical pattern of deceptive behavior in commercially available LLM-based care software integrated into robots. The LLM-equipped robot falsely claimed to have medication reminder functionalities, not only assuring users of its ability to manage medication schedules but also proactively suggesting this capability despite lacking it. This deceptive behavior poses significant risks in healthcare environments, where reliability is paramount. Our findings highlight the ethical and safety concerns surrounding the deployment of LLM-integrated robots in healthcare, emphasizing the need for oversight to prevent potentially harmful consequences for vulnerable populations.
Related Works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,548 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,299 citations