OpenAlex · Updated hourly · Last updated: 23.03.2026, 14:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models

2025 · 4 citations · ACM Computing Surveys
Open full text at publisher

4

Citations

6

Authors

2025

Year

Abstract

Large Language Models (LLMs) are advancing rapidly and promise transformation across fields, but they pose challenges in oversight, ethics, and user trust. This review addresses trust issues including unintentional harms, opacity, vulnerability, misalignment with human values, and environmental impact. Factors undermining trust include societal biases, opaque processes, misuse potential, and the challenges of rapidly evolving technology, especially in finance, healthcare, education, and policy. Recommended solutions include ethical oversight, industry accountability, regulation, and public involvement to reshape AI norms and incorporate ethics into development. A framework assesses trust in LLMs, analyzing trust dynamics and providing guidelines for responsible AI development. The review highlights limitations in building trustworthy AI, aiming for a transparent and accountable ecosystem that maximizes benefits and minimizes risks, and offers guidance for researchers, policymakers, and industry in fostering trust and ensuring responsible use of LLMs. We validate our frameworks through comprehensive experimental assessment across seven contemporary models, demonstrating substantial improvements in trustworthiness characteristics and identifying important disagreements with existing literature. Both theoretical foundations and empirical validation are provided in comprehensive supplementary materials.

Similar works

Authors

Institutions

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)