This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Generative AI Awareness and Leveraging Inclusivity
Citations: 0
Authors: 3
Year: 2025
Abstract
Artificial intelligence (AI) is a rapidly developing technology with the potential to have a major impact on society. However, there is growing concern that generative AI could be used in ways that exclude or harm certain groups of people. This raises the question of how generative AI awareness can be used to promote inclusivity. This paper presents the findings of an exploratory study on the impact of generative AI awareness on inclusivity. The study involved semi-structured interviews with participants from a variety of backgrounds, including the general public, specific stakeholder groups, and marginalized groups. The findings suggest that generative AI awareness can have a positive impact on inclusivity. Participants who were more aware of AI were more likely to support policies and practices that promote inclusivity in the development and use of AI. They were also more likely to take steps to mitigate the potential risks of generative AI for marginalized groups. However, the study also found that AI awareness may be negatively related to inclusivity: participants who were more aware of the potential for AI to be biased were more likely to avoid using AI-powered products and services, which could exclude marginalized groups from the benefits of AI. The findings of this study have several implications for policymakers, businesses, and other organizations that develop and use AI. It is important to be aware of the potential for generative AI to exacerbate existing inequalities and to take steps to promote inclusivity in the development and use of AI.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,326 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,218 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,111 citations