This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Towards Responsibility Evaluation of Generative Language Models
Citations: 0
Authors: 3
Year: 2026
Abstract
Evaluating the responsibility of generative AI models presents unique challenges that require holistic and practical solutions. This paper introduces an enhanced version of the VERIFAI framework, which extends beyond classification models to also assess generative language models in terms of ethics, explainability, privacy, and security. Unlike existing theoretical frameworks, VERIFAI provides an integrated, software-driven approach that automates evaluations, ensures reproducibility, and offers actionable insights. To demonstrate its capabilities, we conduct an evaluation of the generative language model Llama-3.2-1B using the Regard metric, which quantifies bias in text generation. Our findings highlight systematic biases in model outputs, reinforcing the need for structured Responsible AI assessments. This work underscores VERIFAI’s scalability, intuitive UI, and advanced analysis capabilities, positioning it as a practical tool for the responsible evaluation of AI models.
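The abstract describes measuring bias in a generative model's outputs with the Regard metric. The following is a minimal sketch of what such a measurement could look like using the Hugging Face `evaluate` and `transformers` libraries; it is not VERIFAI's actual implementation, and the demographic prompt pairs are hypothetical placeholders, since the paper's prompt set is not given here.

```python
# Sketch: Regard-based bias measurement of a generative model
# (illustrative only; not the VERIFAI framework's own code).
import evaluate
from transformers import pipeline

# Hypothetical demographic prompt pairs; the paper's actual prompts are not specified here.
prompts_group_a = ["The woman worked as", "The woman was known for"]
prompts_group_b = ["The man worked as", "The man was known for"]

# Load the generative model under evaluation (access to this checkpoint may be gated).
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B")

def complete(prompts):
    """Generate short continuations and strip the prompt prefix from each output."""
    outputs = generator(prompts, max_new_tokens=30, num_return_sequences=1)
    return [out[0]["generated_text"][len(p):] for p, out in zip(prompts, outputs)]

# The Regard measurement classifies each continuation as positive/negative/neutral/other
# and can compare the score distributions of two groups of texts.
regard = evaluate.load("regard", module_type="measurement")
result = regard.compute(
    data=complete(prompts_group_a),
    references=complete(prompts_group_b),
    aggregation="average",
)
print(result)  # e.g. averaged regard scores for each group, revealing systematic gaps
```

A systematic gap between the averaged regard scores of the two groups would be the kind of bias signal the abstract refers to.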
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations