This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Insights into suggested Responsible AI (RAI) practices in real-world settings: a systematic literature review
10
Citations
4
Authors
2025
Year
Abstract
AI-enabled systems have significant societal benefits, but only if they are developed, deployed, and used responsibly. We systematically review 45 empirical studies in real-world settings to identify suggested Responsible AI (RAI) practices to ensure that AI-enabled systems uphold stakeholders' legitimate interests and fundamental rights. Our findings highlight eleven areas of suggested RAI practices: harm prevention, accountability, fairness and equity, explainability, AI literacy, privacy and security, human-AI calibration, interdisciplinary stakeholder involvement, value creation, RAI governance, and AI deployment effects. Our findings also show that there is more discussion about how RAI is supposed to be practiced than there are existing RAI practices. Ad hoc implementation of RAI practices in real-world settings is concerning because almost 80% of the AI-enabled systems reported in the 45 included articles are applied in use cases that can be categorised as high-risk settings, and over half are reported in the deployment phase. Our findings also highlight the crucial role of stakeholders in ensuring RAI. Classifying stakeholders into user, non-user, and primary stakeholders can thus help understand the dynamics of the settings where AI-enabled systems are (to be) deployed and guide the implementation of RAI practices. In conclusion, although there is a consensus that RAI practices are a necessity, their implementation in real-world settings is still in its early days. The involvement of all relevant stakeholders is irreplaceable in driving and shaping RAI practices. There is a need for more comprehensive and inclusive RAI research to advance RAI practices in real-world settings.
Related works
The global landscape of AI ethics guidelines
2019 · 4,504 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,856 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,377 citations
Fairness through awareness
2012 · 3,267 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations