OpenAlex · Updated hourly · Last updated: 14.03.2026, 22:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users

2023 · 35 citations · 6 authors · Open Access
Open full text at the publisher

Abstract

Large language models (LLMs) like ChatGPT have recently gained interest across all walks of life due to the human-like quality of their textual responses. Despite their success in research, healthcare, and education, LLMs frequently include incorrect information, called hallucinations, in their responses. These hallucinations could lead users to trust fake news or change their general beliefs. Therefore, we investigate mitigation strategies desired by users to enable identification of LLM hallucinations. To achieve this goal, we conduct a participatory design study in which everyday users design interface features that are then assessed for their feasibility by machine learning (ML) experts. We find that many of the desired features are well received by ML experts but are also considered difficult to implement. Finally, we provide a list of desired features that should serve as a basis for mitigating the effect of LLM hallucinations on users.

Topics

Misinformation and Its Impacts · Topic Modeling · Artificial Intelligence in Healthcare and Education