This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Unraveling generative AI in BBC News: application, impact, literacy and governance
Citations: 13
Authors: 2
Year: 2024
Abstract
Purpose – This study aims to uncover the ongoing discourse on generative artificial intelligence (AI), literacy and governance while providing nuanced perspectives on stakeholder involvement and recommendations for the effective regulation and utilization of generative AI technologies.
Design/methodology/approach – This study chooses generative AI-related online news coverage on BBC News as the case study. Oriented by a case study methodology, this study conducts a qualitative content analysis of 78 news articles related to generative AI.
Findings – The analysis of the 78 news articles shows that generative AI is portrayed in the news in the following ways:
- Generative AI is primarily used to generate texts, images, audio and videos.
- Generative AI can have both positive and negative impacts on people's everyday lives.
- People's generative AI literacy includes understanding, using and evaluating generative AI and combating generative AI harms.
- Various stakeholders, encompassing government authorities, industry, organizations/institutions, academia and affected individuals/users, engage in the practice of AI governance concerning generative AI.
Originality/value – Based on the findings, this study constructs a framework of competencies and considerations constituting generative AI literacy. Furthermore, this study underscores the role played by government authorities as coordinators who conduct co-governance with other stakeholders regarding generative AI literacy and who possess the legislative authority to offer robust legal safeguards against harm.