This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Machine conviction
Citations: 0
Authors: 1
Year: 2025
Abstract
Large language models (LLMs) are not just writing poems and answering trivia anymore. They are becoming master persuaders, capable of crafting arguments that can change our minds on everything from political views to health choices. This article explores the rapidly evolving science of artificial intelligence (AI) persuasion, revealing how these digital influencers work, how effective they are compared to humans, and the profound ethical and societal implications of this new technology. We delve into the strategies LLMs use, the challenges of detecting AI-generated influence, and the urgent need for regulations to ensure this powerful technology is used responsibly.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations