OpenAlex · Updated hourly · Last updated: 2026-04-11, 01:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Integrating Explainable Artificial Intelligence into Malaysia’s Medical Device Regulatory Framework: A Preliminary Analysis

2025 · 0 citations · International Journal of Research and Innovation in Social Science

0 Citations · 4 Authors · Year: 2025

Abstract

Today, Artificial Intelligence (AI) is transforming the landscape of healthcare by automating mundane processes, enhancing efficiency, refining diagnoses, expediting the development of more effective medicines, and much more. However, a review of the literature on AI in healthcare signals recurring concerns about data quality, from collection and analysis to interpretation and deployment, along with their ethical implications. These data practices, together with challenges to traditional patient–doctor relationships, privacy, autonomy, and institutional trust, raise serious concerns about accountability gaps. In this context, traditional product liability laws and the professional liability framework remain the baseline for assigning liability for AI-related harms, but prove ill-equipped to address the black-box nature of AI. The inability to provide reasons behind AI outputs complicates the legal determination of causation, fault, and the evidentiary standard. Without clear mechanisms to verify decisions or assign responsibility, these concerns undermine public confidence in AI technologies and healthcare governance. This research therefore aims to investigate the feasibility of translating the principles of Explainable AI (XAI) into a legally operative framework within Malaysia’s medical device regulatory regime. XAI is a set of principles and techniques developed to make AI-generated outputs interpretable to human users. In healthcare, XAI is particularly significant because it enhances transparency, informed consent, duty of care, and accountability: by interpreting AI reasoning, it allows patients to weigh risks and options and enables more informed clinical decision-making. This research adopts a doctrinal approach, synthesizing statutory provisions, regulatory documents, and scholarly literature to analyse the integration of explainability requirements within the Malaysian framework.
International best practices found in the European Union Artificial Intelligence Act and the International Medical Device Regulators Forum (IMDRF) Software as a Medical Device guideline are drawn upon to reinforce the analysis and, ultimately, to devise a contextually relevant framework for Malaysia. The findings indicate the absence of explicit explainability requirements under Malaysia’s existing medical device regulations, notwithstanding their solid foundation for ensuring AI safety and performance. Nevertheless, strengthening requirements for technical documentation, post-market surveillance, human oversight, and transparency obligations would support the integration of XAI principles into Malaysia’s regulatory structure. Introducing these explainability requirements not only strengthens accountability but also promotes trust in AI-enabled healthcare.
