This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bridging AI and Regulation: Large Language Models for Documentation Compliance Check
Citations: 0
Authors: 6
Year: 2025
Abstract
As Artificial Intelligence (AI) permeates our lives at an ever faster pace, both AI operators and consumers demand robust regulations to ensure trustworthy AI applications. The European Union’s AI Act addresses this by establishing a regulatory framework for high-risk AI systems, emphasizing the need for proper documentation to ensure compliance. This paper presents a novel approach to assessing AI documentation using Large Language Model (LLM)-based methods: a GPT-4 prompting approach and fine-tuned DeBERTa and Mistral-7B models. Due to the lack of relevant datasets, we constructed a novel benchmark dataset comprising text passages from AI research publications. AI experts matched these passages with regulatory requirements and classified them into different fulfilment classes. In our comparative study on this dataset, fine-tuning the DeBERTa model achieves 92% ± 1% accuracy in classifying compliance categories, significantly outperforming the more complex GPT-4 and Mistral-7B. Overall, this research advances AI governance by providing insights into automating documentation compliance checks. Finally, by making the models and dataset publicly available, we promote further research into enhancing transparency and accountability in AI systems.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,495 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 citations
Fairness through awareness
2012 · 3,265 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations