This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable AI in Regulatory Reporting and Audit Readiness: A Human–AI Collaboration Framework for Compliance-Critical Systems
Citations: 0
Authors: 1
Year: 2026
Abstract
Regulatory reporting systems are increasingly adopting artificial intelligence in response to growing data volumes and the complexity of compliance rules. Yet the opaque decision-making of many AI systems poses risks in compliance-critical domains: black-box models weaken auditability, regulatory trust, and human oversight. The Human-AI Collaboration Framework for Explainable AI in regulatory reporting shows that AI can assist rather than supplant regulatory analysts, keeping them in the loop. Adding an explainability layer and keeping humans involved in regulatory data processes can yield more accurate reports, ease audit preparation, and speed the resolution of regulatory issues. The framework concludes that explainability and human-AI collaboration are key requirements for audit-ready regulatory programs in the 21st century: transparency is not the opposite of regulatory automation through AI, but its complement. Human judgment remains at the core of compliance, while AI enables analysis at scale. Organizations that demonstrate transparency and accountability through governance structures position themselves to gain a competitive advantage, strengthen regulatory relationships, and reduce compliance risk.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,514 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,386 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations