This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Media literacy and mechanisms for verifying AI-generated images
Citations: 0
Authors: 2
Year: 2025
Abstract
In today’s world, we are inundated with fake content, particularly with the proliferation of social media, which has contributed to the spread of various forms of deception. Among these, AI-generated images have become increasingly prevalent. It has therefore become imperative to activate the role of media literacy in countering the risks posed by AI-generated images. Media literacy enhances the audience’s ability to think critically when consuming media content, enabling them to analyze images more consciously and thus determine their authenticity. It also encourages individuals to take responsibility when sharing images and content online, thereby reducing the spread of misinformation, and it empowers individuals to distinguish between genuine and fake content circulating on social media. Media literacy focuses on training users in tools and techniques for verifying images in general and AI-generated images in particular, and it raises awareness of the signs that may indicate an image is AI-generated, such as distortions in the image, illogical details, and the like. The study primarily focuses on the risks associated with AI-generated images, such as the spread of misinformation, the creation of idealized beauty standards, biases, and violations of intellectual property rights, among others. The study also emphasizes the mechanisms for verifying these images, employing various methods to determine an image’s origin, such as visual verification, search-engine verification, and watermark tracking.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,374 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,261 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,126 citations