This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Mapping medical specialty vulnerability to superintelligent AI: A competency-guided generative AI foresight framework
Citations: 1
Authors: 7
Year: 2025
Abstract
The evolution of artificial intelligence (AI) raises questions about the future roles of physicians. This study aimed to propose an exploratory foresight model for stratifying risk across medical specialties, using board-defined competencies and generative AI (genAI) evaluation as the assessment tool. We developed a heuristic framework, the Machine automatability, Diagnostic Ambiguity, Legal/ethical complexity, Interpersonal intensity, Knowledge codifiability, Evidence in data, Difficulty of procedures (MALIKED) score, to capture dimensions of displacement vulnerability for 27 board-recognized specialties. To minimize individual bias, ratings were generated by three genAI models (ChatGPT, DeepSeek, and Gemini). Data-centric fields—Clinical Pathology (30.3/35), Anatomic/Clinical Pathology (29.3/35), and both Anatomic Pathology and Radiology (28.0/35 each)—clustered in the highest-vulnerability tier. In contrast, procedurally intensive or patient-interaction-heavy specialties—including Psychiatry (11.0/35), Neurosurgery (11.7/35), Obstetrics/Gynecology (13.0/35), General Surgery (13.0/35), Pediatrics (14.3/35), Emergency Medicine (14.3/35), and Family Medicine (14.3/35)—formed the lowest-vulnerability tier. Between these extremes, mixed-mode specialties, such as Internal Medicine (17.0/35) and Neurology (17.0/35), along with Ophthalmology (19.3/35) and Anesthesiology (21.3/35), occupied an intermediate zone. Displacement risk was driven by knowledge codifiability and data-centricity, while procedural complexity and interpersonal interaction intensity exerted protective effects. This exploratory foresight framework suggests that the risk of displacement by advanced or potentially superintelligent AI is unevenly distributed across medical specialties. While data-driven fields appear most exposed, no specialty is categorically insulated, as multimodal AI and robotics continue to evolve.
The MALIKED framework is not predictive but intended as a structured lens for debate, education, and workforce planning regarding the long-term implications of AI in medicine.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations