This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Decision making methodology based on generalized confidence and interpretability of artificial intelligence recommendation
Citations: 0
Authors: 2
Year: 2023
Abstract
The article examines the transition in medical diagnostics from traditional clinician-dependent methodologies to evidence-based approaches using artificial intelligence (AI). The primary objective of the research is to develop a decision-making methodology based on the integration of human decisions and AI-based recommendations, together with the interpretability of AI results for humans. The proposed methodology involves the formation of decisions based on human intelligence (HI) and AI, the assessment of the utility of recommendations, and the generation of a joint decision based on cumulative probability. The practical application of the methodology was demonstrated through an experiment involving the classification of non-medical images. The research findings underscore the importance of transparency, interpretability, and trust in AI results for the successful utilization of AI in healthcare. Figs.: 1. Refs.: 16 titles.
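The joint-decision step described in the abstract (combining HI and AI outputs via cumulative probability) could be sketched as follows. The paper's exact fusion rule is not given here, so this is an illustrative assumption: the functions `fuse` and `decide` are hypothetical names, and the normalized-product rule stands in for the paper's cumulative-probability formula, assuming both humans and the AI provide per-class probability estimates.

```python
# Hypothetical sketch: fusing a human (HI) and an AI per-class
# probability estimate into one joint decision. The normalized-product
# rule below is an illustrative stand-in, not the paper's actual formula.

def fuse(p_hi, p_ai):
    """Combine two per-class probability lists by normalized product."""
    assert len(p_hi) == len(p_ai), "both estimates must cover the same classes"
    joint = [h * a for h, a in zip(p_hi, p_ai)]
    total = sum(joint)
    return [j / total for j in joint]

def decide(p_hi, p_ai):
    """Return the class index with the highest fused probability."""
    joint = fuse(p_hi, p_ai)
    best = max(range(len(joint)), key=joint.__getitem__)
    return best, joint

# Example: the human slightly favors class 0, the AI strongly favors
# class 1; the fused decision follows the stronger evidence.
label, joint = decide([0.6, 0.4], [0.2, 0.8])
```

In this sketch, the fused decision picks class 1, since the AI's confidence outweighs the human's mild preference; a real system would also weight each source by its assessed utility, as the abstract suggests.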
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 cit.