This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
From MYCIN to MedGemma: A Historical and Comparative Analysis of Healthcare AI Evolution
Citations: 6 · Authors: 2 · Year: 2025
Abstract
The evolution of artificial intelligence (AI) in healthcare has transitioned through distinct technological eras, each marked by unique advancements and challenges. This article provides a comprehensive historical and comparative analysis of healthcare AI assistants, from early rule-based systems like MYCIN in the 1970s–1980s to contemporary large language models (LLMs) such as Med-PaLM and MedGemma, and explores emerging adaptive AI frameworks. Rule-based systems offered transparency and interpretability but were limited by brittleness and scalability. The machine learning (ML) era introduced data-driven approaches, improving predictive analytics but raising concerns about bias and explainability. The 2020s saw the rise of LLMs, enabling conversational AI for clinical triage and patient education, though hallucinations and safety risks emerged. Future adaptive AI systems promise real-time personalization and continual learning but lack empirical validation. The study synthesizes technical architectures, functional applications, and evaluation metrics across eras, highlighting gaps in cross-era benchmarking and integrated governance. Ethical and regulatory challenges have also evolved, from liability concerns in rule-based systems to bias and fairness in ML, and now to safety and alignment in LLMs. Despite progress, fragmentation persists in the literature, with limited comparative analyses and a focus on provider-facing tools over patient-oriented applications. This review underscores the need for unified frameworks to evaluate performance, ensure ethical compliance, and guide the development of next-generation AI in healthcare. By addressing these gaps, the field can better harness AI’s potential to transform clinical practice while mitigating risks.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations