This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Causal Automated Machine Learning for Zero-Shot Decision-Making in Low-Resource Environments: A New Paradigm in Machine Learning Automation and Transferability
Citations: 0
Authors: 5
Year: 2025
Abstract
Automated machine learning (AutoML) has reduced the burden of manual model development, yet existing systems remain predominantly black-box predictors lacking causal reasoning. These limitations become particularly problematic in resource-constrained settings such as rural healthcare facilities, minority-language processing, and emergency response operations. To address these challenges, we introduce a novel causal AutoML framework that integrates zero-shot learning for robust decision-making under minimal supervision. Our approach fundamentally shifts from prediction-focused black boxes to interpretable, causal-aware systems. We achieve this by embedding structural causal models directly into the AutoML pipeline, ensuring that model selection and optimization are guided by causal principles rather than statistical correlations alone. A key innovation is our causal-aware transfer engine, which uses graph-based contrastive learning to identify and transfer deep causal relationships rather than superficial feature similarities, overcoming a critical failure point in traditional domain adaptation methods. We evaluated the framework on established benchmarks: the Medical Information Mart for Intensive Care III (MIMIC-III) for healthcare, the Infant Health and Development Program (IHDP) for policy evaluation, and Task-Aware Representation of Sentences (TARS) for natural language tasks. Across these benchmarks, the framework demonstrated significant performance gains under data-scarce conditions: on the IHDP policy evaluation benchmark, it achieved a precision in estimation of heterogeneous effects (PEHE) score of 1.6, and on the zero-shot TARS natural language processing benchmark, it outperformed Generative Pre-trained Transformer 3 (GPT-3) by 2.7%. These statistically significant improvements (p < 0.05) highlight a practical path toward more reliable, interpretable, and ethically aligned AI systems for high-stakes applications where data is scarce and transparency is paramount.
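To give a concrete flavor of the idea of guiding model selection by causal principles rather than correlations, the following is a minimal sketch, not the authors' implementation: a hand-specified structural causal model (a DAG over variables) restricts an AutoML search to feature subsets drawn from the causal parents of the outcome, excluding merely correlated descendants. All names here (`scm_edges`, `admissible_feature_sets`, the example variables) are illustrative assumptions.

```python
# Illustrative sketch: SCM-constrained feature-set enumeration for AutoML.
# A purely correlational search could pick `follow_up_visits` (a descendant
# of the outcome, hence correlated with it); the causal constraint forbids it.
from itertools import combinations

# Hypothetical SCM as a directed edge list: cause -> effect.
scm_edges = [
    ("treatment", "outcome"),
    ("severity", "treatment"),
    ("severity", "outcome"),
    ("outcome", "follow_up_visits"),  # descendant of the outcome
]

def parents(node, edges):
    """Direct causes of `node` under the SCM."""
    return {src for src, dst in edges if dst == node}

def admissible_feature_sets(outcome, all_features, edges):
    """Enumerate candidate feature subsets restricted to causal parents
    of the outcome; an AutoML search would then score only these."""
    causal = parents(outcome, edges) & set(all_features)
    sets = []
    for r in range(1, len(causal) + 1):
        sets.extend(combinations(sorted(causal), r))
    return sets

features = ["treatment", "severity", "follow_up_visits"]
print(admissible_feature_sets("outcome", features, scm_edges))
# Only subsets of {severity, treatment} are admissible.
```

In a full pipeline, each admissible subset would be passed to the usual AutoML model-selection loop; the causal graph acts as a hard constraint on the search space rather than a post-hoc explanation.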
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations