OpenAlex · Updated hourly · Last updated: 14.03.2026, 20:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Rapid Integration of LLMs in Healthcare Raises Ethical Concerns: An Investigation into Deceptive Patterns in Social Robots

2025 · 6 citations · Digital Society · Open Access
Open full text at the publisher

6 Citations · 2 Authors · Year 2025

Abstract

Conversational agents are increasingly used in healthcare, with Large Language Models (LLMs) significantly enhancing their capabilities. When integrated into social robots, LLMs offer the potential for more natural interactions. However, while LLMs promise numerous benefits, they also raise critical ethical concerns, particularly regarding hallucinations and deceptive patterns. In this case study, we observed a critical pattern of deceptive behavior in commercially available LLM-based care software integrated into robots. The LLM-equipped robot falsely claimed to have medication reminder functionalities, not only assuring users of its ability to manage medication schedules but also proactively suggesting this capability despite lacking it. This deceptive behavior poses significant risks in healthcare environments, where reliability is paramount. Our findings highlight the ethical and safety concerns surrounding the deployment of LLM-integrated robots in healthcare, emphasizing the need for oversight to prevent potentially harmful consequences for vulnerable populations.

Related works

Authors

Institutions

Topics

AI in Service Interactions · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education