This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Unveiling the black box: imperative for explainable AI in cardiovascular disease (CVD) prevention–author reply
0
Citations
3
Authors
2024
Year
Abstract
In their commentary entitled "Unveiling the Black Box: Imperative for Explainable AI in Cardiovascular Disease (CVD) Prevention", Wu et al. rightly emphasise the need for explainability and transparency of artificial intelligence (AI) systems [1]. We wholeheartedly agree. These are essential for healthcare workforce buy-in, accountability to patients, and the delivery of equitable care. In our earlier article in this journal, we discussed the ways in which AI is being incorporated into a CVD prevention service at the National University Heart Centre Singapore (NUHCS) [2]. van Royen et al. have earlier proposed key quality criteria for AI-based models for CVD management [3]. In the same regard, the AI tools used in our healthcare system aim for full transparency and for accessibility of comprehensive clinical and social data as the foundation for AI modelling. Aggregated healthcare information is made accessible at the national level, software and code are shared across healthcare institutions, and aggregate population trends are published regularly [2]. Our AI models focus on clearly defined clinical use-cases, with equally clear outcomes and intent of usage, including the initiation of necessary medications, at recommended doses, to reduce cardiovascular risk based on risk factors [2]. Any CVD prediction models developed for cardiovascular risk prediction are trained on large, population-level local datasets and undergo rigorous internal and external validation and benchmarking against existing risk scoring systems [4]. We believe that AI systems built on such a framework empower discoveries and insights, and we remain cautious not to reject findings that we cannot yet explain. Human players of Go have been recorded making more novel, previously unobserved moves and higher-quality decisions after the introduction of the deep-learning AI program AlphaGo [5]. We aspire to explain the novel suggestions and correlations of the AI.
Explainable AI models must also remain embedded in a strong patient-physician relationship, allowing patients to better understand their health, while allowing physicians to focus on tailoring guideline-directed clinical management to the individual circumstances and beliefs of the people we treat. The authors declare no financial or personal relationships with other people or organizations that could inappropriately influence their work.
References
1. Wu Y, Lin C. Unveiling the black box: imperative for explainable AI in cardiovascular disease prevention. Lancet Reg Health West Pac. 2024;48:101145.
2. Dalakoti M, Wong S, Lee W, et al. Incorporating AI into cardiovascular diseases prevention–insights from Singapore. Lancet Reg Health West Pac. 2024;48:101102. https://doi.org/10.1016/j.lanwpc.2024.101102
3. van Royen FS, Asselbergs FW, Alfonso F, Vardas P, van Smeden M. Five critical quality criteria for artificial intelligence-based prediction models. Eur Heart J. 2023;44:4831-4834. https://doi.org/10.1093/eurheartj/ehad727
4. Lim C, Hilal S, Ma S, et al. Recalibrated Singapore-modified Framingham risk score 2023 (SG-FRS-2023). https://bpb-us-w2.wpmucdn.com/blog.nus.edu.sg/dist/4/6173/files/2023/10/2023_Recalibrated_Singapore-Modified_Framingham_Risk_Score_SG-FRS-2023_report.pdf. Accessed June 21, 2024.
5. Shin M, Kim J, van Opheusden B, Griffiths TL. Superhuman artificial intelligence can improve human decision-making by increasing novelty. Proc Natl Acad Sci USA. 2023;120:e2214840120. https://doi.org/10.1073/pnas.2214840120
Linked articles
Incorporating AI into cardiovascular diseases prevention–insights from Singapore
Improved upstream primary prevention of cardiovascular disease (CVD) would enable more individuals to lead lives free of CVD. However, there remain limitations in the current provision of CVD primary prevention, where artificial intelligence (AI) may help to fill the gaps. Using the data informatics capabilities at the National University Health System (NUHS), Singapore, empowered by the Endeavour AI system and combined large language model (LLM) tools, our team has created a real-time dashboard able to capture and showcase information on cardiovascular risk factors at both the individual and geographical level: CardioSight.
Unveiling the black box: imperative for explainable AI in cardiovascular disease prevention
Dalakoti et al. have a positive attitude towards the application of artificial intelligence (AI) in cardiovascular disease (CVD) prevention in Singapore, especially emphasizing the potential of AI tools such as CardioSight and CHAMP in identifying high-risk individuals and implementing preventive measures. However, a core issue is missing from their argument: the explainability and transparency of these AI systems. Given the increasing penetration of AI technology into medical decision-making, it has become a top priority to uncover the mystery of the "black box" and ensure that the recommendations provided by these systems can be understood and trusted by both physicians and patients alike.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,284 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,233 citations
"Why Should I Trust You?"
2016 · 14,179 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,096 citations