OpenAlex · Updated hourly · Last updated: 17.03.2026, 17:45

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Beyond the hype: Navigating bias in AI-driven cancer detection

2024 · 7 citations · Oncotarget · Open Access
Open full text at publisher

7

Citations

6

Authors

2024

Year

Abstract

In recent years, the integration of artificial intelligence (AI) into healthcare has been heralded as a revolutionary force, particularly in cancer detection. Headlines tout AI systems that outperform human radiologists in identifying tumors, promising a future where cancer diagnoses are faster, more accurate, and universally accessible. However, as we stand on the cusp of this AI-driven medical revolution, it is crucial to look beyond the hype and address a significant challenge: bias in AI-driven cancer detection systems.

AI technology is increasingly used to identify cancer at early stages from mammograms, CT scans, and biopsy images. The applications of deep learning algorithms are expanding, and new approaches have demonstrated remarkable capabilities in cancer screening, diagnosis, risk prediction, prognosis, treatment strategy, response assessment, and follow-up [1]. These advancements have sparked hope for earlier cancer detection, improved treatment decisions and planning, and reduced morbidity and mortality.

As we eagerly adopt AI models, we need to take a moment to consider the potential biases they may contain. It is important to note that these models are not the solution for every cancer problem, and they may have inherent limitations. The accuracy of an AI-based model relies on the data on which it has been trained. If the initial datasets are not representative of the population in which the model will be used, its performance and generalizability will ultimately suffer. For example, an AI model trained on Caucasian patients may struggle to accurately detect skin cancer in patients with darker skin, leading to missed diagnoses or false positives [2]. Further, aspects of a population's context (e.g., genetics, diet, traditions, access to healthcare) can produce varying presentations and incidence rates of a specific disease, which may be difficult to predict if an AI model does not have adequate training data. Most algorithms learn from historical datasets that reflect existing disparities in healthcare, and if these datasets are not diverse and representative of all populations, the resulting AI systems may perform poorly for underrepresented groups [3]. As discussed, bias in AI is not limited to racial disparities. Multiple factors such as socioeconomic status, gender, age, internet access, and geographic location can influence the quality and availability of medical data and the performance of AI systems. An AI system trained on data from well-funded …

Author contributions: YS wrote the first draft; HP, DV, DS, EQ, and QH performed critical review.
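The generalizability concern raised in the abstract can be made concrete with a simple subgroup audit: computing a detector's sensitivity (true-positive rate) separately for each demographic group in a held-out set, rather than reporting a single aggregate score. Below is a minimal illustrative sketch; the group labels and toy prediction arrays are hypothetical and not taken from the article.

```python
from collections import defaultdict

def sensitivity_by_group(y_true, y_pred, groups):
    """Per-group sensitivity (true-positive rate) for a binary detector.

    y_true: 1 = cancer present, 0 = absent
    y_pred: the model's binary predictions
    groups: a demographic label for each case (hypothetical example)
    """
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:
            pos[g] += 1
            if pred == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

# Toy data: the detector misses every cancer in the underrepresented group B.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "A", "B", "A"]
print(sensitivity_by_group(y_true, y_pred, groups))  # {'A': 0.75, 'B': 0.0}
```

A gap like the one in this toy output (0.75 vs 0.0) is exactly the kind of disparity an aggregate accuracy figure would hide, which is why per-subgroup evaluation is a common first check for the biases the authors describe.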

Related works

Authors

Institutions

Topics

AI in cancer detection · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging