Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
The State of Artificial Intelligence in Precision Oncology: An Interview with Eric Topol
1
Zitationen
2
Autoren
2024
Jahr
Abstract
AI in Precision OncologyAhead of Print Free AccessThe State of Artificial Intelligence in Precision Oncology: An Interview with Eric TopolEric J. Topol and Douglas FloraEric J. TopolScripps Research Translational Institute, La Jolla, California, USA.Search for more papers by this author and Douglas FloraSt. Elizabeth Healthcare, Edgewood, Kentucky, USA.Editor-in-Chief, AI in Precision Oncology.Search for more papers by this authorPublished Online:25 Jan 2024https://doi.org/10.1089/aipo.2024.29004.intAboutSectionsPDF/EPUB Permissions & CitationsDownload CitationsTrack CitationsAdd to favorites Back To Publication ShareShare onFacebookXLinked InRedditEmail Eric J. Topol, MDDouglas Flora, MDIntroductionEric Topol, MD, is a world-renowned cardiologist, best-selling author of several books on personalized medicine, and the founder and director of the Scripps Research Translational Institute in La Jolla, California. But he has also been the tip of the spear for the past 10 years or more as an advocate for using digital technologies, artificial intelligence (AI), and health care. Those arguments were introduced in his 2019 book Deep Medicine: How Artificial Intelligence can make Healthcare Human Again.1In December 2023, AI in Precision Oncology Chief Editor Doug Flora, MD, sat down with Topol in the opening keynote session of the journal's inaugural virtual summit, “The State of AI in Precision Oncology,” which broadcast on December 13, 2023, and is available to view on demand.This is a lightly edited and abbreviated transcript of that conversation.Eric, how did you get into AI and digital health?Eric Topol: It started in college. I was at the University of Virginia and really into genetics. I even wrote a thesis about prospects for gene therapy in humans—that was about 40 years ago! I got back to genomics when that became possible. In the 1990s, we started accruing huge data sets, and then digital biology became a possibility with sensors and smartphones connected to the internet, which was another dimension of big data. All of a sudden, there were all these data, we're all dressed up with nowhere to go without the proper analytics.AI is the third leg of the stool. It's a progression of needing ways to analyze immense data sets over time. As you know, cancer is a genomic disease, however, it's not just a genomics disease—we tend to oversimplify things. That's why having as many layers of data, whether digital or environmental or immunological or other important layers, that orthogonal data perspective is vital.My friend Ryan Langdale called this a “Cambrian moment,” where we have so much data because things have been digitalized and digitized. In November 2022, we had the release of the first models of generative AI and the acceleration/democratization of uptake. When did you start to see that happening? You certainly alluded to it in your book Deep Medicine.Topol: When I was writing Deep Medicine, I was talking to AI experts around the world. They told me there was no model yet such as our current models like ChatGPT (GPT-4), Gemini, and imminently GPT-5, but ultimately it was going to be available. My book came out in 2019, but it took almost 5 years to see the light with ChatGPT because the precursors to ChatGPT, even though this was incubating since 2017 with a classic preprint,2 which was never published as a regular article. It took years to get to this point of having massive graphics processing units, well over 25,000 for GPT-4.It was a natural progression in the background because it wasn't until November 30, 2022, when ChatGPT came out and 100 million people got onto it, and said, “Whoa, this is really something!” This is still just the beginning of where we're headed and it doesn't stop here. The ability now is to take multimodal data, whether it's slides and images, audio from visits with patients or bedside rounds, anything in text. We don't even have medically supervised training, specialized training, or fine tuning yet. We are dealing with base models and they are doing extremely well for medical questions and even medical diagnoses right now.What are the differences between the original models and even Siri, Cortana, Alexa, Google Digital Assistant, infiltrating our lives? You authored a nice article in Science last September3 about this transition to multimodal, writing that computers or machines don't have eyes—but they do! What is the revolution brought about by multimodal models such as Gemini and Bard expanding in the new GPT-4, which are more multimodal than the originals?Topol: The first phase of this kind of AI era was deep learning, deep neural networks, inputs that were largely annotated. Ground truth experts saying it's this or it's that. You can't do that at scale because there are not enough experts to do this labeling. And it costs a fortune to do hundreds of thousands if not millions of images. So we had to get past supervised learning to get to self-supervised, unsupervised, to let the data basically move ahead through artificial neurons, or artificial neural networks. It's training, but not through experts in ground truths now.Next was: how do we go from one task like an image to multiple tasks? That was the basis of the transformer model I mentioned beginning in 2017, because instead of going back and forth with each word in a sentence or a paragraph—a recurrent neural network, a type of deep neural network—it could do the whole thing. It had the context and soon enough, it wasn't just words, it was videos and images and of course speech. So that was the big change—to improve on old school deep neural networks around 2015. And by 2020 it had really been validated to go to the next big jump, which are these multimodal, self-supervised, or unsupervised. That's taken us with enormous computing requirements to where we are right now.Recently in Harvard Business Review, Ron Adner and James Weinstein wrote about the Napster model.4 We've got a log jam now and we haven't proved the return on investment. We know it is going to be there, but health care administrators are struggling with this. As they discussed the Napster model solved the question that the music companies had, which is how do we monetize this? How do we make this digital? It opened a brand-new ecosystem. That's where health care is going. If you're a health care administrator (like me), where would you suggest systems start? Because right now they're stuck between Siri and Skynet!Topol: Well, you're alluding to the fact there hasn't been much implementation of AI and health care to date. There are over 650 algorithms that are cleared or approved by the FDA. Most of those are deep learning, unimodal one-task algorithms; no transformer models or multimodal has been cleared or approved by FDA. There's no transparency—as a medical community, administrators or whoever's making decisions, we can't even review the data. Most of these haven't ever been published, and if they are published, they're proprietary and nontransparent. So, we have a problem.The other issue, of course, is things that may not need FDA clearance, which is a good thing. So, if I was an administrator right now, I want to undo the damage done with electronic health records (EHRs), where EHRs—if it didn't ruin the patient–doctor relationship and hurt doctors' and nurses' lives, it sure didn't do any good.Most clinicians hate data clerk work because it takes them away from what they really want to do—caring for patients—and it eats up hours when they're not seeing patients. We can move now without any FDA approval to a synthetic note. And it's not just the note from the conversation with a patient in clinic (adjusted for articulating the physical examination) because otherwise that would not happen during the visit. But just with that little addition, everything else is put into notes that are far better than what you see in Epic, Cerner, and others. Once you have that note digitized, it does all the other things such as preauthorization and billing, follow-up appointments, and prescriptions. It even nudges the patients—Did you check your blood pressure? What were the results? Did you go for the test that was ordered?It also coaches physicians to be more sensitive and empathetic reading the notes saying, why did you interrupt the patient after X seconds? Patients really like this because they have the audio recording and to clear up any confusion or things they forgot; they can go from the note to the link to the audio file. So this is the future and it's now and it's going to take over in the next couple of years. Administrators who want to make their clinician group happy and patients being able to see literally face-to-face their clinicians might want to think about trying these things out if they haven't already or wide-scale adoption as some health systems are already doing.Coding and billing, rev cycle management, rote administrative tasks, fighting with insurance companies—those could all be done with generative AI just a bit of training. Let's talk about upending cancer screening. Using age is truly an anachronism in many ways. How does that change the way that we think about screening cancers using polygenic risk scores and AI-driven algorithms?Topol: This is something I feel strongly about—we've got this all wrong. We're only picking up 12–14% of all cancers that are being diagnosed through mass screening. It's wasting tens of billions if not hundreds of billions of dollars every year. It's inducing a lot of anxiety for all the false positives as in mammography, but also other screening. And it's all based on age, which is so dumb. Now when you start to think about the fact that cancers occurring in younger people, much more commonly now, people in their 20s are coming across with colon cancer and women with breast cancer in their 30s. If you just use the current criteria, we're going to miss these people. So is there a better way?I am convinced there is. There are layers of data that would define the risk of each individual. We've already seen just for pancreatic cancer using data sets from Denmark and the U.S. Veterans health data set, that you can pick up cancer risk from the notes, laboratory tests, we wouldn't [otherwise] see the trends. Then you start adding unstructured text and polygenic risk scores, which are very inexpensive to obtain and we have for most common cancers. We can define risk and with AI picking up things in images that we can't see. If we start to reboot how we do cancer screening, I think we're going to get to a point where we can narrow down the field.For example, 88% of women will never develop breast cancer. Why do those women need to have mammography every year or two? Let's define risk. Let's not miss young people who are at risk for cancer. We have cell-free tumor DNA tests. There are many ways we can do this, but we can't be complacent about how we do screening now because it isn't working. It's wasteful. And the cancers that are being picked up waiting for some symptoms or scans to be so abnormal, they're often late. And we're not changing the natural history of the cancer. We've got to get better at that too.In the United States, we're missing 94% of patients who are appropriate for screening for lung cancer. Can we screen smarter, not harder? Can we use these machine learning tools to better identify specifically who is at highest risk? The recent large Dutch Pancreatic Cancer Screening trial used a transformer AI model to examine 28,000 pancreatic cancer patients—and outperformed existing models without even using factors such as BRCA or PALB2 or you didn't even have genomics in that trial and it still had an area under the curve of 0.88. That type of data for us clinicians is better than a CT scan or an MRI or a PET. We keep banging the drum. Cancer doctors like me are really tired of finding stage 4 cancers when the tools exist to fix half of them just without invention of a new test.Topol: There was an article last year from deCODE in Iceland in the New England Journal about mutations in cancer genes that if you knew about, could extend lifespan 7 years.5 The leading one was BRCA2. So if you just knew that you were BRCA2 [positive], just that alone to prevent not just breast and ovarian cancer, but all of men's cancers too—we're not taking advantage of what's published in the literature. All this great knowledge is compartmentalized in a different orbit and it isn't being offered to patients. And we've got to get out of that mode because we're losing people by not integrating knowledge into practice.There are emerging AI tools that identify areas that might warrant closer attention from an endoscopist on colonoscopies. You've written extensively about pattern doctors being outpaced by machine learning and by training these machine learning modules to identify things in their faintest footsteps. How do we address this for your medical students?Topol: What's interesting is that the gastroenterologists have led the field of AI doing randomized trials. There were recently 33 randomized trials from around the world, many from China, but now most places around the world have done randomized trials. The pickup of polyps is substantially better when machine vision real time is being used during the colonoscopy. Interestingly, there are also studies that as the day goes on, the gastroenterologist is more likely to miss those polyps. Now, we haven't seen an article yet that shows by picking up the significantly higher rate of adenomas as polyps that it changes the natural history of cancer. But that is pretty likely. We've already seen in 80,000 women randomized with mammography, with AI or without AI, that the AI helped tremendously in accuracy of diagnosis and reducing the time of review of scans.So we're seeing some great compelling evidence for the benefit of the patterns. I would extend that to pathology slides. It's amazing that from a whole slide image, you could get the driver mutations, the structural variations in play, whether it is a metastasis or the primary source of that tumor, and even the prognosis from a slide to a reasonable level of accuracy. We're not using that. We still are in the mode of pathologists who are not in agreement about what the H&E slide shows. We can do better with patterns of slides and of all types of medical images.It is augmented intelligence. We're not replacing radiologists. I still think they're going to be there for quality control and to make sure that we are appropriately interpreting with our whole brain the nuances of these films. You were instrumental in founding the Lerner College of Medicine, and you've devoted a great deal of your entire life's work is to medical education reaching the most people you can. How are we going to bridge that gap for students? They're obviously tech savvy, but you no longer need to memorize the 15 causes of pancreatitis. They have it in their fingertips. They can find it with a quick look with the new Ray Ban glasses. How are we going to train them to be critical thinkers and to ask the right questions, even if it's just designing prompts?Topol: That's a fantastic question because that makes us rethink not only how do we educate, but how do we select future physicians? We used to pick them still today by they have to be kind of brainiacs where really high MCAT scores and GPAs, and they won't even get past the threshold without that. What about their ability to communicate and empathize and connect with other people? I'm hoping that that will change and we'll emphasize that. You're going to have so much knowledge at your fingertips that memorizing everything is not the deal. It's about reasoning and building that ability to build trust, presence with your patients and know that you have their back.I think it's going to be an education that will have to include what is AI, how does it work, what are the liabilities? The fact that it can lose performance over time, that it requires surveillance, that it has issues about bias and inequities and lots of issues about AI that medical students have to understand because they're going to be using it in their daily lives. Once we get the right people in medical school and we got to train the old docs like us too, because most of us have not gotten up to speed. We can go to ChatGPT and play around, but that's just going to be medicine in the future. Everyone in medicine needs to understand the nuances. They don't have to know how to code. In fact, you can get GPT to code for you.What you do need to know is how do you do good prompts? How do you get the output that you really are looking for? Those are the things that we have to learn about as well as when to trust it. If I have any questions with a GPT-4 response, I'll do it over again. It's like double data entry. We have got to learn when we use it that we can trust it. A lot of things are made up now so you have to be savvy enough and require that authentication. We don't want to use something that's faulty, especially involving patient care.Allie Miller draws an of not using existing AI tools in health care to be to with a or the with The tools are and they're very and do we get clinicians do regular doctors and clinicians their into this? do they find the to start to with the that you and I are I think be going to GPT-4 or when the other that are or use I don't do Google for anything important But start to get start to understand better as you use it The other of course, is to get administrators to start to get these conversation tools so they the because that's another way to I think they are going to be one of the most tools that we'll see for a time in of having benefit to the daily of there are some good The book The AI in is a good got to change medicine for the I don't know anything else that has this You mentioned the be because not only do you get the ability to know about every from the but you also get a to disease, disease, and years there are any You can get the scores from the As are we going to be doing and AI already have a that can but soon we'll be seeing things such as diagnosis of through an AI you can get at a and cancers that will get you a diagnosis whether to even see a or a and in The is We're seeing an AI to their with All of things are in the patient because you think AI hasn't everything think of it just for it's also for As we move into the future of medicine, where do you see this going in the next the next couple of I'm hoping we'll start to see cancer screening get We won't have it but at some of the trials are to the old way of doing cancer screening. We will get a diagnosis whether it's because the accuracy of scan in the next couple of years or whether it's because each through their health has to a GPT that them a diagnosis of If you only have 7 for a that's not enough to about a and to of the things we work on is not let the AI make things not let more patients get into any daily a because we've got many making the as you're more get your up even are things we have to in the next couple of years because this is a very if not the most of medicine that we'll see in our But we There will be tools to every of a data you even start to look at their or see the patient and they will be in the next couple of years. It's very for us to keep up with the medical literature. years from now, don't keep up for you the daily if you want on everything in your field because the of medical for you is something that is right in the of generative from different in the of the of to a That's the first time we've really seen We need some to keep us on the Siri and not the How do we do this? It can't be because there will be It's so that I don't know if can keep there a I This is so the to get to that proper ground where you're not but you're not the we don't want to see anything like that. The big is this artificial intelligence and whether we're going to whether we're already there, on how you define it and what can happen when we get to that As a we don't want to that machines can be than us and when that it's very But also we're It's so we haven't really to We what with tools like we had a in doing and he up in have to think about what could go and to and prevent these very This is not going to be the like the haven't even one multimodal AI So we're in the We have to deal with in our of in the AI that when you put something out like a with AI, but you haven't it and people of we got a There are all of that have to be this will get on the right but we're not there AI is a not an the patient be the I that the clinicians to this data take of this and as the in patient care and make sure that these tools are well to have data sets that include and patients or patients of me in We have to make sure that that's the as you look what are you most The is that we that that we've medical school in the I in years medicine in the with what it is now, and it's really I know we can get that We have the of the of time so that medicine much more than and that we are able to our which is for patients and they know that they're being what I'm most It won't happen but we're the of that. We know that and can be is there, and a on the That is the for I can see the accuracy being but that's not We have to have the of this in The Harvard Business article mentioned that AI is a that all The also made the point that the is already coming My is that with like this, such as AI in Precision AI, start to in a to that these tools have real And like the and the who care for the patients to in the of these like us to clinicians to make sure that we are a of the We have to have compelling evidence of benefit that the which the change in We don't have that for the most And that's where your and can really make a that level of evidence because it. And if we don't have it, it will be up Topol Deep Medicine: How Artificial Intelligence can make Healthcare Human New Google Google Topol As artificial intelligence goes multimodal, medical Science Google Adner Weinstein could how health care Harvard Business Google and their with in New Google The AI in Medicine: GPT-4 and Google this J. Topol and Douglas State of Artificial Intelligence in Precision Oncology: An Interview with Eric in Precision of of
Ähnliche Arbeiten
Molecular Cloning: A Laboratory Manual
1989 · 129.982 Zit.
Molecular cloning: A laboratory manual
1990 · 86.157 Zit.
Fiji: an open-source platform for biological-image analysis
2012 · 68.486 Zit.
NIH Image to ImageJ: 25 years of image analysis
2012 · 63.321 Zit.
KEGG: Kyoto Encyclopedia of Genes and Genomes
2000 · 38.177 Zit.