This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence in healthcare – Crossing the chasm
0
Citations
1
Authors
2023
Year
Abstract
Crossing the Chasm is a seminal book on technology marketing by Geoffrey Moore, which describes how disruptive technology products and services are sold, and thereby adopted, in more traditional industries. As a physician with an MBA building a technology start-up in the artificial intelligence (AI) space, I think about crossing the chasm of AI adoption in healthcare a lot. The idea is simple: how does one build technology so useful that all users (or at least the great majority of users) adopt it and gain its benefits? What this means is that the onus of adoption is not on the users but on the builders and sellers of the technology itself. More on this later. Coming to the topic at hand – AI. While many definitions of AI exist, ranging from machines gaining the ability to do tasks that humans can do, to machines developing the ability to learn tasks themselves, the one I am personally most drawn to is Tesler’s Theorem – AI is whatever machines cannot do yet. Did you know that today, if you are alive, there is absolutely no doubt that you are using AI, or that AI is impacting your life? Any time you search on Google Search, use Google Maps, or order food on Zomato, you are using AI. Every time you type on your phone keyboard, draft an email on Google or even see the news – there is an AI engine at the back end. In fact, today, most magnetic resonance imaging and computed tomography scanners out there use AI to create the beautiful images that we are used to seeing! Do you even think of any of this as AI? Not really. Why? Because AI is a moving target. Imagine, back in 1955, when the term AI was coined by John McCarthy, if someone had turned up with a ‘simple’ accounting calculator, it would have been the epitome of human intelligence being put into a machine! Today, we do not consider a calculator a smart device, let alone AI. AI, again, is a moving target.
Today, there are two main types of systems that are referred to as AI. The first is deep learning systems, where some input and matching output data are given to the computer, and the computer starts predicting the outputs by looking at the inputs. For example, if you give a system enough chest X-rays containing pleural effusion, the system will itself learn what pleural effusion looks like, and you would have an AI that can now diagnose pleural effusion on chest X-rays. The second is large language models (LLMs), which try to encapsulate all the knowledge/text available on the Internet and allow the user to query the data using ‘prompts’. These are the ChatGPT/Bard kind of applications, also called generative AI. Given this knowledge, let us now build a framework for AI in healthcare. Again, simply because the number of possibilities is near-infinite (think the entire index of Harrison’s!), it is better to take a framework-based approach to thinking about AI in healthcare, as opposed to going into specific examples. Note that for the purposes of the ensuing discussion, we will stick to ‘clinical’ AI and not ‘administrative’ AI. Clinical AI is anything that affects clinical outcomes, while administrative AI deals with billing, scheduling, reimbursements, inventory management, and other facets of care delivery. Essentially, all healthcare can be broken down into three core components:
1. Preventive and population health – when the patient is not ill per se, but is a productive member of society
2. Diagnostics – when the patient becomes a patient, presenting with some symptoms, and is then subjected to tests to determine the cause of the illness
3. Therapeutics – once a diagnosis is established, the patient needs to be treated.
Applications within each of the above three can be built using either deep learning systems or LLMs.
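The "input and matching output data" idea behind deep learning systems can be illustrated with a toy supervised learner. The sketch below is a minimal perceptron, not a deep neural network, and the two-number "feature vectors" standing in for chest X-rays are entirely hypothetical; real systems learn from raw pixel data with far larger models. It only shows the principle the abstract describes: the program is given labelled examples and adjusts itself until it predicts the labels on its own.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (input, output) pairs by correcting mistakes."""
    w = [0.0] * len(samples[0])  # one weight per feature
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred       # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical hand-made features for four "X-rays":
# [opacity score, costophrenic-angle blunting score]
xrays = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]  # 1 = pleural effusion, 0 = normal
w, b = train_perceptron(xrays, labels)
print(predict(w, b, [0.85, 0.9]))  # effusion-like case -> 1
print(predict(w, b, [0.10, 0.10]))  # normal-looking case -> 0
```

Note that nobody told the program what pleural effusion "looks like"; the rule emerged from the labelled examples, which is the essence of the approach described above.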
The greatest proliferation of AI today is in the realm of diagnostics, primarily because of the availability of high-quality digital datasets in radiology. Even from a regulatory perspective, there are more than 700 US Food and Drug Administration-approved radiology AI products today. In fact, the WHO has approved the use of AI on chest X-rays for screening of tuberculosis in the absence of radiologists, democratising access to care. In addition, a chest X-ray AI auto-reporting tool from a company called Oxipit from Lithuania received European Medical Device Regulations (MDR) approval for autonomous ‘normal’ chest X-ray reporting. Suffice it to say that the practice of radiology is slowly but surely being disrupted by AI, and this will lead to exponential growth in the sector. No article on AI is complete without a discussion of the replacement of physicians by machines. Doctors lie at the tip of the cognitive pyramid and are amongst the most intellectually capable people in the world. If the work that a doctor does is going to be done by an AI system, imagine all the other tasks, all the other ‘jobs’, that AI would be doing by that time. That said, think of all the narrow tasks that AI can do that a clinician would much rather not do. Do we really need clinicians to handle cough, cold, fever or other basic ailments? Do we really need radiologists to report normal X-rays or normal mammography scans? What if one doctor in an outpatient department could treat 100 patients per hour, as opposed to the current 10? What if one could democratise access to healthcare in a way that every patient gets access to the best quality doctors, limited by neither cost nor geography? That is the power of AI. So, I would urge you to think of how you can use this technology in your practice, and not whether you should use it.
It is only a matter of time before AI for healthcare crosses the chasm, so we should all be ready to serve our patients with this technology!
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations