OpenAlex · Updated hourly · Last updated: 03.04.2026, 19:45

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

NIH AI Assurance Lab Pilot

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

0 citations · 8 authors · 2026

Abstract

Background

Investigators recognize AI as a powerful tool for enabling deeper insights and improving research efficiency, yet its adoption remains challenging. The AI assurance exploratory pilots, conducted through real‑world NIH use cases, underscored how resource‑intensive development, rapidly evolving technology, and inconsistent alignment with ethical, technical, and assurance standards hinder scalable deployment. In the absence of clear, standardized methods, playbooks, and best practices, investigators often rely on custom, in‑house tools and fragmented workflows, spending significant time on costly, dense, duplicative, or poorly tailored resources that slow progress and limit the adoption, safety, and impact of AI in biomedical and health research. By creating a shared foundation of AI assurance resources, an NIH collaborative AI Assurance Lab would enable researchers to focus on developing novel applications.

Insights from Pilots

The pilots focused on evaluating barriers and gathering insights from the biomedical and health research community to accelerate responsible AI development and adoption – priorities that are underscored in the NIH Strategic Plan for Data Science (2025) and America’s AI Action Plan (2025). The pilots uncovered gaps in the AI assurance resources available to the NIH research community (e.g., curated playbooks, formalized benchmarks, testing and evaluation methods, and standardized tools) that are slowing advancements in the field. Challenges for the biomedical and health research community include:

- Absence of clear, standardized guidance and best practices for AI development and deployment.
- Inconsistent processes for aligning AI workflows with established ethical, technical, and assurance standards.
- Reliance on inefficient, custom-built tools and processes due to a lack of standardized and accessible AI resources.
- Unscalable and resource-intensive efforts required to develop and maintain AI systems throughout the research lifecycle.

To address these challenges, NIH, in partnership with MITRE, recommends establishing a collaborative AI Assurance Lab as a trusted resource for the biomedical and health research community. An NIH AI Assurance Lab would leverage collaborative research engagements using real‑world AI use cases to generate tailored lessons learned, curated playbooks, benchmarks, testing and evaluation methods, and other assurance resources that directly support responsible AI‑enabled research. Operating through an iterative framework centered on collaboration, continuous improvement, and validation with real‑world evidence, the Lab would identify emerging assurance gaps, develop solutions, and adapt them to practical biomedical and health research settings (see graphic below). By fostering interdisciplinary partnerships, streamlining AI workflows, and setting new benchmarks for ethical and effective AI, the Lab would accelerate AI adoption across NIH initiatives, ultimately advancing scientific discovery, enabling precision medicine, and improving public health for the benefit of both the scientific community and society at large.

Partnership

To assess the current state of AI assurance in research and identify solutions to key challenges, NIH partnered with MITRE, operator of the Health Federally Funded Research and Development Center (Health FFRDC). Results from the initial pilot period of this effort, including a landscape analysis of existing AI assurance resources and initiatives, an overview of real-world pilots, and insights for practical solutions, were gathered into a report.
