OpenAlex · Updated hourly · Last updated: 03 Apr 2026, 04:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI Language Model Rivals Expert Ethicist in Perceived Moral Expertise

2025 · 1 citation
Open full text at the publisher

Citations: 1
Authors: 4
Year: 2025

Abstract

People view AI as possessing expertise across various fields, but the perceived quality of AI-generated moral expertise remains uncertain. Recent work suggests that large language models (LLMs) perform well on tasks designed to assess moral alignment, reflecting moral judgments with relatively high accuracy. As LLMs are increasingly employed in decision-making roles, there is a growing expectation for them to offer not just aligned judgments but also demonstrate sound moral reasoning. Here, we advance work on the Moral Turing Test and find that Americans rate ethical advice from GPT-4o as slightly more moral, trustworthy, thoughtful, and correct than that of the popular New York Times advice column, The Ethicist. Participants perceived GPT models as surpassing both a representative sample of Americans and a renowned ethicist in delivering moral justifications and advice, suggesting that people may increasingly view LLM outputs as viable sources of moral expertise. This work suggests that people might see LLMs as valuable complements to human expertise in moral guidance and decision-making. It also underscores the importance of carefully programming ethical guidelines in LLMs, considering their potential to influence users’ moral reasoning.

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Hate Speech and Cyberbullying Detection