OpenAlex · Updated hourly · Last updated: 15 Mar 2026, 09:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Regulatory Frameworks and Ethical Governance of Neural Networks in Computer Science Applications

2025 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 6 · Year: 2025

Abstract

The growing use of neural networks in vital industries, including healthcare, banking, and government, has heightened concerns about transparency, accountability, and adherence to international regulations. A number of Responsible AI methods exist, but they frequently lack a cohesive system for aligning technical design with ethical and legal requirements at every stage of the model lifecycle. In this research, we present the Regulation-Aware Neural Network Lifecycle (RANNL), a new five-stage paradigm that directly integrates explainability, bias mitigation, and real-time compliance tracking into neural network development. The Compliance Scoring System (CS) in RANNL, in contrast to previous methods, quantitatively assesses models according to their interpretability, fairness, and regulatory readiness. A dynamic Regulation Mapping Matrix links each stage of the lifecycle to international policies such as the EU AI Act, the UNESCO AI Ethics Recommendations, and the OECD AI Principles. To validate the scheme, we apply RANNL to a real-world healthcare prediction problem using deep learning architectures and regulatory toolkits such as AI Fairness 360, Captum, and SHAP. Model transparency, fairness metrics, and compliance scores all improve without sacrificing predictive performance compared to traditional ML pipelines. These findings demonstrate that RANNL can bridge the gap between global AI governance and neural network engineering. By fusing technical and policy viewpoints, this paper offers an interdisciplinary, quantifiable, and scalable method for building trustworthy AI systems.
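To illustrate the kind of per-dimension scoring the abstract describes, the following is a minimal sketch of a compliance score that aggregates interpretability, fairness, and regulatory-readiness sub-scores. The weights, the [0, 1] scaling, and the weighted-average aggregation are assumptions for illustration, not the paper's actual CS definition.

```python
def compliance_score(interpretability: float, fairness: float,
                     regulatory_readiness: float,
                     weights=(0.3, 0.4, 0.3)) -> float:
    """Hypothetical CS: weighted average of three sub-scores in [0, 1].

    The weights are illustrative; the paper does not publish its formula
    in this abstract.
    """
    dims = (interpretability, fairness, regulatory_readiness)
    if not all(0.0 <= d <= 1.0 for d in dims):
        raise ValueError("each sub-score must lie in [0, 1]")
    return sum(w * d for w, d in zip(weights, dims))

# Example: a model with strong fairness but weaker interpretability
score = compliance_score(interpretability=0.55, fairness=0.90,
                         regulatory_readiness=0.70)
print(round(score, 3))  # 0.735
```

In practice the fairness sub-score could be derived from toolkit metrics (e.g. AI Fairness 360's group-fairness measures) and the interpretability sub-score from explanation quality checks on SHAP or Captum attributions; mapping those raw metrics onto [0, 1] is left open here.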

Related works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI