The AIR Framework for Research Transparency: A Critical Analysis of Stage-Specific AI Disclosure in the Context of Accessibility and Research Integrity (Preprint)
Citations: 0
Authors: 1
Year: 2026
Abstract
The rapid adoption of generative AI in research has created an urgent need for transparent disclosure practices. This article presents a critical scholarly analysis of the AIR (AI in Research) framework, a stage-specific transparency standard developed by Electv Training (2026) that categorizes AI use across seven research phases and five engagement bands. Drawing on virtue epistemology and the research integrity literature, I examine AIR's theoretical foundations, arguing that transparency functions as a constitutive epistemic virtue rather than a procedural requirement. Through an inter-rater reliability pilot study (n=15 raters, 9 research scenarios, Cohen's κ=0.72), I demonstrate that AIR enables consistent classification across independent evaluators. However, critical analysis reveals significant limitations: the potential for false precision in inherently ambiguous practices, inadequate treatment of accessibility-related AI use, the risk of stigmatizing legitimate applications, and vulnerability to adversarial compliance. I present three failure-mode scenarios demonstrating classification disagreements, institutional misapplication, and disclosure-related stigma that AIR's design does not adequately address. Comparison with competing frameworks shows that AIR fills a genuine gap in stage-specific vocabulary but requires refinement. As a researcher working on AI accommodations for neurodivergent users, I propose AIR extensions that explicitly address accessibility uses (a new A1-Access sub-band) and protect disclosure of disability-related AI applications. While AIR represents a valuable contribution to research transparency infrastructure, uncritical adoption risks creating new forms of exclusion.
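The abstract reports Cohen's κ = 0.72 for the inter-rater reliability pilot. For reference, here is a minimal sketch of how Cohen's kappa is computed for two raters assigning categorical labels; the band labels in the example are hypothetical illustrations, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the chance agreement expected from
    each rater's marginal label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of marginal label rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical engagement-band codings of 9 scenarios by two raters.
a = ["A1", "A2", "A2", "B1", "B1", "A1", "C1", "A2", "B1"]
b = ["A1", "A2", "B1", "B1", "B1", "A1", "C1", "A2", "A2"]
print(round(cohens_kappa(a, b), 2))  # prints 0.69
```

Values near 1 indicate agreement well beyond chance; the study's κ = 0.72 would conventionally be read as substantial agreement. A multi-rater design such as the reported n=15 pilot would typically use Fleiss' kappa or average pairwise Cohen's kappa instead.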
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations