This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Straw Men, Deep Learning, and the Future of the Human Microscopist: Response to “Artificial Intelligence and the Pathologist: Future Frenemies?”
Citations: 13
Authors: 3
Year: 2017
Abstract
We are pleased the authors of “Artificial Intelligence and the Pathologist: Future Frenemies?” have taken an interest in our essay “AlphaGo, Deep Learning, and the Future of the Human Microscopist.” But we find ourselves in the odd position of being challenged to defend positions we never expressed in our essay.

The main thesis of the authors' critique is that we “hypothesize that the development of intuition and creativity combined with raw computing of artificial intelligence (AI) heralds an age where well-designed and executed AI algorithms can solve complex medical problems, thereby replacing the microscopist.” If you read “Future Frenemies?” without first reading our essay, you'd guess we've written a eulogy for the human microscopist, and the funeral is tomorrow. Nothing could be further from the truth. In our essay we make no prediction that AI will be “replacing the microscopist” or will “take over pathology” anytime soon, as the authors imply. In fact, we stress the “many hurdles to replacing the human microscopist.”

The authors state that “it would be more reasonable to see NGS (next-generation sequencing), digital pathology, whole slide imaging and AI as synergistic technologies to human cognition. We note that the question of human versus computer has now been refined as human versus human with the computer.” Nowhere in our essay do we imply a dichotomous future for pathology—“humans versus computers.” Quite the contrary, the authors are echoing us when we write “we predict that computers will be increasingly incorporated into the daily workflow when they can improve diagnostic accuracy...reduce the amount of time it takes for a pathologist to render a diagnosis...potentially enabling pathologists to focus more cognitive resources on higher-level diagnostic and consultative tasks.”

But when the authors of “Future Frenemies?” suggest that AI will be relegated to “repetitive detailed tasks which require accuracy and speed,” tasks that humans find “mind-numbing and consequently error-prone,” we must part ways. We are no longer speculating about the future: recently, it was announced that self-driving cars will be deployed in Boston—an exercise that anyone who has ever driven in Boston can attest is neither mind-numbing nor mundane. Studies have clearly demonstrated that deep learning systems have the potential to function on par with humans, or even better, at some complex intuitive tasks, such as playing Go, and, as we discussed in our essay, classifying images into diagnostic categories. It's true, the future is impossible to predict, but we see a meaningful interpretive role for both deep learning and the human pathologist in the foreseeable future.