This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Commentary: Is human supervision needed for artificial intelligence?
Citations: 4
Authors: 2
Year: 2022
Abstract
The role of artificial intelligence (AI) and machine learning (ML) in ophthalmology is well documented, with several studies on their role in diagnosing, treating, and prognosticating various eye diseases.[1] The rise of machines and AI is inevitable, and we must all be prepared for it. For every technological advancement, humans have found ways to use it for both good and evil, and the same is true of the fast-growing technologies of AI and ML. The authors of the accompanying article[2] developed a novel AI algorithm for detecting glaucoma with a human in the loop (HITL) who annotates images to supervise the learning of the algorithm. This is unlike several other ML studies that tried to identify glaucoma from fundus images by using deep learning techniques[3] without HITL.

The black box problem

There is much confusion about the black box problem of AI.[4] Many AI algorithms are not explainable, even by the programmers who created them, because the code evolves over several virtual generations and ends up as complex code whose workings are opaque to humans. We are unable to see the “rough work,” only the final answer. Thus, especially in the critical field of healthcare, there is considerable doubt about whether we can trust AI.[5]

Explainable artificial intelligence (XAI)

XAI is a set of processes and methods that allows humans to understand and trust the results and output created by ML algorithms. It describes the AI model, its expected impact, and potential biases. Especially in healthcare, AI-powered decision-making can be trusted only with open information about the accuracy, fairness, transparency, and outcomes of the ML algorithms.[6] As the complexity of ML increases, there is a trade-off between accuracy and the ability to generate explainable and interpretable conclusions. There are now several approaches that address the black box problem and aim to develop XAI. One is to use an integrated gradients explanation to display a heatmap over the image being interpreted.[7] Such a heatmap is easily understood by a human and often helps to pick up details that may have been missed.

Interpretability and explainability

Doshi-Velez and Kim defined interpretability as “the ability to explain or to present in understandable terms to a human.”[8] Miller defined it as “the degree to which a human can understand the cause of a decision.”[9] Thus, interpretability relates to the ease of understanding the intuition behind the output of the ML algorithm, whereas explainability relates to the internal logic and mechanics of the ML model.

Human in the loop (HITL)

Fully automatic deep learning is convenient and is what many researchers attempt to develop. However, the unique challenges of medical image interpretation mean that HITL[10] ML may be the better option for safer, more accurate results and for preventing gross mistakes. A human expert who marks annotations and gives feedback for reinforcement learning makes the algorithm much better.

Future of AI and ML

There is no doubt that AI and ML are here to stay and will become embedded in multiple facets of modern life. Healthcare is one of the areas that will be greatly affected by AI and ML. The fourth industrial revolution (4IR) has brought rapid developments in technology that are accessible to all.
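To make the integrated gradients idea from the abstract concrete, the sketch below shows a minimal PyTorch implementation that attributes a classifier's prediction to input pixels; the resulting map can be overlaid on a fundus image as a heatmap. This is an illustration only, not the commentary authors' code: the model, target class, and step count are assumed placeholders.

```python
# Minimal sketch of integrated gradients for an image classifier (assumed PyTorch model).
# `model` maps a (1, 3, H, W) tensor to class logits; all names here are illustrative.
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate integrated gradients by averaging gradients along a
    straight-line path from a baseline (default: black image) to the input."""
    if baseline is None:
        baseline = torch.zeros_like(image)
    total_grads = torch.zeros_like(image)
    for i in range(1, steps + 1):
        # Interpolate between the baseline and the actual input.
        interpolated = baseline + (float(i) / steps) * (image - baseline)
        interpolated.requires_grad_(True)
        # Score of the class being explained.
        score = model(interpolated)[0, target_class]
        grad = torch.autograd.grad(score, interpolated)[0]
        total_grads += grad
    # Scale the averaged gradients by the input-baseline difference (Riemann sum).
    attributions = (image - baseline) * total_grads / steps
    # Collapse colour channels into a single-channel map suitable for a heatmap overlay.
    return attributions.abs().sum(dim=1, keepdim=True)
```

In practice, a larger step count or a library implementation (for example, Captum's IntegratedGradients) would typically be used; the point is simply that attributions of this form can be rendered as a heatmap that a clinician can inspect alongside the model's prediction.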
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations