This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
HumAIne Benchmarking Suite: Evaluation of Human–AI Collaboration
Citations: 0
Authors: 1
Year: 2026
Abstract
This presentation introduces the HumAIne Benchmarking Suite, a comprehensive framework designed to evaluate the quality of Human-AI Collaboration (HAIC). Moving beyond traditional model-centric metrics such as accuracy and latency, the suite provides a structured methodology for assessing the collaboration process itself, measuring indicators such as system-level efficiency, user trust, cognitive load, and interaction frequency. The slides detail the architecture of the platform, the standardized logging schema used across diverse domains (including healthcare and smart cities), and its specialized modules for fairness evaluation, simulation, and system usability.
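To make the idea of a domain-agnostic logging schema and collaboration-level indicators more concrete, the following is a minimal, hypothetical sketch; the field names, labels, and the simple interaction-frequency metric are assumptions for illustration and are not taken from the HumAIne materials.

```python
# Hypothetical sketch only: the actual HumAIne logging schema is not reproduced
# on this page, so every field name and label below is an assumption.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class InteractionEvent:
    """One logged Human-AI interaction, domain-agnostic (healthcare, smart cities, ...)."""
    session_id: str                        # groups events from one collaboration session
    domain: str                            # e.g. "healthcare" or "smart_city" (assumed labels)
    actor: str                             # "human" or "ai"
    event_type: str                        # e.g. "suggestion", "override", "acceptance"
    latency_ms: float                      # time the actor took to respond
    trust_rating: float | None = None      # optional self-reported trust (e.g. 1-5 scale)
    cognitive_load: float | None = None    # optional NASA-TLX-style workload score
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def interaction_frequency(events: list[InteractionEvent]) -> float:
    """Naive collaboration-level indicator: human-initiated events per AI event."""
    human = sum(1 for e in events if e.actor == "human")
    ai = sum(1 for e in events if e.actor == "ai")
    return human / ai if ai else float("inf")


if __name__ == "__main__":
    log = [
        InteractionEvent("s1", "healthcare", "ai", "suggestion", 120.0),
        InteractionEvent("s1", "healthcare", "human", "override", 4300.0, trust_rating=3.0),
    ]
    print(json.dumps([asdict(e) for e in log], indent=2))
    print("human/ai interaction ratio:", interaction_frequency(log))
```

A uniform record like this is one plausible way to let the same metrics (trust, cognitive load, interaction frequency) be computed across otherwise dissimilar application domains.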
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations