This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Human-Centered AI Perception Prediction in Construction: A Regularized Machine Learning Approach for Industry 5.0
Citations: 0
Authors: 4
Year: 2026
Abstract
Industry 5.0 emphasizes human-centered integration of artificial intelligence in industrial contexts, yet successful adoption depends critically on workforce perception and acceptance. This research develops and validates a machine learning framework for predicting AI-related perceptions and expected impacts in the construction industry under the small-sample constraints typical of specialized industrial surveys. Specifically, the study aims to develop and empirically validate a predictive AI decision-support model that estimates the expected impact of AI adoption in the construction sector based on digital competencies, ICT utilization, AI training and experience, and AI usage at both individual and organizational levels, operationalized through a composite AI Impact Index and two process-oriented outcomes (perceived task automation and perceived cost reduction). Using a dataset of 51 survey responses from Slovak construction professionals collected in 2025, we implement a methodologically rigorous approach designed specifically for limited-data regimes. The framework encompasses ordinal target simplification from five to three classes, dimensionality reduction through theoretically grounded composite indices that reduce the feature set from 15 to 7, exclusive use of low-variance regularized models, and leave-one-out cross-validation for unbiased performance estimation. The optimal model (Lasso regression with recursive feature elimination) predicts cost-reduction perception with R² = 0.501, MAE = 0.551, and RMSE = 0.709, while six classification targets achieve a weighted F1 of 0.681, representing statistically optimal performance given the sample constraints and the variability of perception measurement. Comparative evaluation confirms that regularized models outperform high-variance alternatives: random forest (R² = 0.412) and gradient boosting (R² = 0.292) exhibit substantially lower generalization performance, empirically validating the bias-variance trade-off rationale.
Key methodological contributions include explicit bias-variance optimization to prevent overfitting, feature selection via RFE that reduces the input space to six predictors (personal AI usage, AI impact on budgeting, ICT utilization, AI training, company size, and age), and a demonstration that principled statistical approaches achieve meaningful predictions without requiring large-scale datasets or complex architectures. The framework provides a replicable blueprint for perception and impact prediction in data-constrained Industry 5.0 contexts, enabling targeted interventions, including customized training programs, strategic communication prioritization, and resource allocation for change management initiatives aligned with predicted adoption patterns.
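The regression pipeline the abstract describes (Lasso regression wrapped in RFE, evaluated with leave-one-out cross-validation) can be sketched roughly as follows. This is a minimal illustration, not the study's actual code: the synthetic data, the `alpha` value, and the coefficient values are assumptions; only the sample size (51), the composite feature count (7), and the RFE target of six predictors are taken from the abstract.

```python
# Sketch of a Lasso + RFE + leave-one-out CV workflow (illustrative only).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 51, 7  # 51 respondents, 7 composite indices (as in the study)
X = rng.normal(size=(n, p))
# Hypothetical outcome: three features carry signal, plus noise.
y = X[:, :3] @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=n)

# RFE around a Lasso base estimator keeps 6 of the 7 predictors;
# standardization makes the L1 penalty comparable across features.
model = make_pipeline(
    StandardScaler(),
    RFE(Lasso(alpha=0.05), n_features_to_select=6),
)

# Leave-one-out CV: each of the 51 responses is predicted by a model
# fitted on the other 50, giving a near-unbiased performance estimate.
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print(f"LOOCV R2 = {r2_score(y, pred):.3f}, "
      f"MAE = {mean_absolute_error(y, pred):.3f}")
```

With only 51 observations, LOOCV is attractive because it uses nearly the full sample for every fit, and the heavy L1 regularization keeps model variance low, which is the bias-variance rationale the abstract invokes against random forest and gradient boosting.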
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations