OpenAlex · Updated hourly · Last updated: 23 Mar 2026, 09:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Trust-Centered AI and Security Modeling Approach for Early Cancer Diagnosis, Population-Level Health Analysis, and Secure Deployment in U.S. Healthcare Infrastructure

2020 · 0 citations · American Journal of Advanced Technology and Engineering Solutions · Open Access
Open full text at the publisher

0 citations · 3 authors · Year: 2020

Abstract

This study addresses the persistent gap between high-performing cancer AI prototypes and real-world adoption by proposing and testing a trust-centered AI plus security-modeling blueprint for early cancer diagnosis and population-level health analysis within U.S. healthcare infrastructure. The purpose was to quantify how often published "enterprise-ready" capabilities actually co-occur with deployability pillars such as interpretability, robustness, security, and equity. Using a quantitative, cross-sectional, case-based review design, each included paper was treated as a case reflecting cloud and enterprise healthcare deployment contexts (for example, multi-site systems, integrated EHR and imaging stacks, or networked inference services). The sample comprised 45 cases (N = 45). Key variables were five Likert-scored readiness dimensions (1–5): clinical validation rigor, interpretability and communication support, robustness and generalization evidence, security and privacy modeling, and fairness and equity evidence, plus composite indicators such as trust-mechanism presence and a rubric-scaled Trust-Centered Deployment Readiness (TDR) score. The analysis applied descriptive statistics (counts, percentages, means), cross-tabulations between trust-mechanism grouping and validation readiness, and a composite readiness summary. Headline findings show that radiology and pathology cases dominated (31/45, 68.9%); interpretability appeared in 28/45 (62.2%), but comprehensive interpretability was limited (12/45, 26.7%; mean M = 3.1/5); external validation or multi-site evaluation occurred in only 16/45 (35.6%); and explicit security or privacy-by-design elements were present in 14/45 (31.1%), with the lowest readiness mean (M = 2.4/5). Trust-mechanism studies (19/45, 42.2%) showed higher validation readiness (M = 3.6/5) and more external validation (11/19, 57.9%) than performance-only studies (5/26, 19.2%).
Overall, only 8/45 (17.8%) met high composite readiness (≥0.75), while 21/45 (46.7%) were moderate and 16/45 (35.6%) low. The implication is that healthcare AI procurement and governance should prioritize a complete evidence package that couples external validation, calibrated trust cues, security controls across the lifecycle, and subgroup equity reporting, rather than selecting models on accuracy alone.
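The scoring logic described in the abstract can be sketched as follows. This is a hypothetical reconstruction, not the authors' published rubric: it assumes the composite is the mean of the five Likert dimensions rescaled to 0–1, and the 0.5 cutoff between "low" and "moderate" is an assumption (the abstract only states the high cutoff of ≥0.75).

```python
# Hypothetical sketch of the composite Trust-Centered Deployment Readiness
# (TDR) scoring. Dimension names follow the abstract; the rescaling and the
# moderate/low boundary are assumptions for illustration.

DIMENSIONS = ["validation", "interpretability", "robustness", "security", "equity"]

def composite_readiness(scores: dict) -> float:
    """Average the five 1-5 Likert scores and rescale to a 0-1 composite."""
    mean = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return (mean - 1) / 4  # 1 -> 0.0, 5 -> 1.0

def readiness_band(composite: float) -> str:
    """Bucket a composite score into the bands used in the abstract."""
    if composite >= 0.75:   # high cutoff stated in the abstract
        return "high"
    if composite >= 0.5:    # assumed moderate cutoff
        return "moderate"
    return "low"

# Example case: strong validation and robustness, weak security modeling.
case = {"validation": 4, "interpretability": 3, "robustness": 4,
        "security": 2, "equity": 3}
c = composite_readiness(case)
print(round(c, 2), readiness_band(c))  # -> 0.55 moderate
```

A case scoring 5 on every dimension maps to a composite of 1.0, and a case scoring 1 everywhere maps to 0.0, so the 0.75 high-readiness threshold corresponds to a mean Likert score of 4.0 across the five dimensions.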

Topics

Artificial Intelligence in Healthcare and Education · Electronic Health Records Systems · Explainable Artificial Intelligence (XAI)