This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial intelligence in the public debate: risk amplifiers and mitigators in media discourse
Citations: 0
Authors: 4
Year: 2026
Abstract
This study examines the discourse surrounding the risks of artificial intelligence (AI) in Spanish digital media during the first year after the launch of ChatGPT in November 2022. Adopting a qualitative approach rooted in discourse analysis and communication studies, it identifies types of risks and key voices shaping the public debate. The analysis is based on 2705 journalistic texts collected from six Spanish newspapers between December 2022 and November 2023. A total of 878 statements regarding specific risks were identified and used to create a taxonomy consisting of seven risk categories and six distinct groups of voices. The findings indicate that the most widely covered risks in the media are ‘risks to civilisation and humanity’ and ‘risks to individuals’. Meanwhile, the three most prominent groups of voices in the debate are journalists and media outlets, government officials and regulators, and representatives from the business sector. A year-long analysis of the evolution of risk discourse reveals changes in how AI-related risks are portrayed and shifts in the social actors participating in the public media debate. The study also highlights the perception that some representatives from tech companies may be promoting AI-related risks for self-serving purposes. These strategies appear aimed at emphasising long-term, existential risks in order to divert attention from the immediate, tangible risks already present, impeding further regulation that could curb the growth of AI. Furthermore, by portraying AI as an abstract, uncontrollable force, they dilute the sense of human responsibility for its development and regulation. At the same time, other voices are emerging in the public debate that downplay these risks and seek to discredit those warning of their potential consequences.
Similar works
The global landscape of AI ethics guidelines
2019 · 4,536 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,392 citations
Fairness through awareness
2012 · 3,270 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations