OpenAlex · Updated hourly · Last updated: 14.03.2026, 04:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Fairness and Explainability of Artificial Intelligence Based Healthcare Applications

2026 · 0 citations · Journal of Dental Education · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The perspective article by Anand and Vaderhobli titled “The Ethical Boundaries of Chairside Generative Artificial Intelligence in Dental Education” is timely and provides a high-level overview of the use of generative artificial intelligence by dental students, along with its potential applications and pitfalls [1]. It behooves us to take a cautiously optimistic approach to the use of artificial intelligence (AI)-based tools and applications in dental education and oral health care delivery. Numerous diagnostic methods, medical devices, and health monitoring systems are built using AI algorithms. It has come to a point where a patient's demographic information and clinical records can be fed to an AI application to generate a personalized treatment plan. For example, in the field of orthodontics, we used to obtain clinical photographs, radiographs, and plaster models of a patient and then spend a considerable amount of time evaluating them to determine a treatment plan. Presently, we can use AI to analyze these clinical records automatically and produce a treatment plan in a few seconds [2]. A major advantage of AI is that it can process large volumes of data drawn from multiple sources in a short amount of time to recognize patterns that humans cannot comprehend, answer clinical questions, and deliver a plan of care tailored to the needs of a particular individual. A major concern is that there is sparse information on the nature of the data used to build and test AI models and generative AI tools. Companies that develop AI-based medical devices do not typically release information on how their models are built and how they work. Several FDA-approved AI algorithms are insufficiently validated, and their use cannot be justified [3]. Without guardrails, accelerated innovation can backfire. Typically, AI models are built, trained, tested, and validated on clinical datasets that comprise the healthcare records of a large number of patients.
Traditionally disenfranchised populations, who tend to have the poorest health outcomes and face systematic barriers to accessing healthcare, along with the uninsured or underinsured and those of low socioeconomic status, tend to be poorly represented in clinical datasets [4]. AI models built on homogeneous datasets that are not representative of the general population perform poorly [5]. This has the potential to exacerbate biases, further worsen clinical outcomes, and accentuate the already existing asymmetry in healthcare access and quality. Yet another issue with AI-based healthcare applications is the lack of transparency about how the AI algorithms work. Large amounts of input data are processed by computers using algorithms modeled on neural networks, which are designed to mimic the human brain. It is not clear how the data are processed and how a particular output or decision is arrived at. This is called the black box paradigm [6]. In essence, neither the AI developer, the healthcare provider, nor the patient knows how a particular clinical decision was reached by the AI model. This is unethical because, as healthcare providers, we must give patients all the information they need to make a truly informed decision. While it is tempting to jump on the AI bandwagon, we have to be cognizant of the potential pitfalls and ethical concerns stemming from the indiscriminate use of AI in healthcare education. It is heartening to know that there have been recent efforts to implement safe and responsible AI in healthcare [7]. This endeavor is, ideally, a first step on the pathway to making AI fair and explainable, and hopefully we can leverage the potential of generative AI-based tools in dental education and oral healthcare delivery.

The authors have nothing to report. The authors declare no conflicts of interest.


Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare