Opening a new chapter in health care: reporting on the inauguration of the International Conference on AI in Medicine
2024 · 2 authors · 3 citations
Abstract
Artificial intelligence (AI) continues to transform society in ways beyond the revolutions that foreshadowed it, namely the industrial, internet and digital revolutions. While being unique, the ‘AI revolution’ offers significant opportunities and presents inimitable challenges with important ethical considerations different to the eras preceding it. This is best exemplified by its emerging role in medicine, transforming healthcare delivery, improving patient outcomes and changing the way we diagnose and treat disease. Through advanced algorithms and machine learning, AI enables more accurate imaging analysis, earlier disease detection, better risk assessment, more personalised treatment and management, and many other breakthroughs such as robotic surgery and drug discovery. It facilitates the use of virtual assistants and chatbots, enhancing patients’ and care providers’ accessibility to healthcare information. With predictive analytics, it empowers healthcare providers to optimise scarce resource allocation. Excitement, discoveries and applications around AI in medicine continue to increase, progressing at a breakneck speed, with nobody being able to accurately predict what the future holds. Despite the uncertainty, AI in medicine possesses the power to alleviate some of the field’s grand challenges: containment of healthcare costs, more efficient diagnosis, precise individualised treatment, and prevention of clinician shortages and physician burnout. Many healthcare systems globally see imminent opportunities to reduce administrative burden and enhance operational efficiency, citing improving clinical documentation, structuring and analysing data, and optimising workflows as key priorities. The greatest hurdles to keeping up with rapid technology development include resource and cost constraints, acquiring acceptance and trust of healthcare providers and recipients, and deliberations and consensus of ethical, regulatory and legal considerations. 
To address these myriad questions, facilitate debate and mutual understanding, and foster collaboration, the Lee Kong Chian School of Medicine (LKCMedicine), Nanyang Technological University (NTU), Singapore, in partnership with the College of Engineering, NTU, and the National Healthcare Group (NHG), Singapore, held the inaugural International Conference on AI in Medicine (iAIM) in August 2023 [Figure 1]. Over three days, more than 600 delegates from Singapore and international institutions participated in robust discussion and debate on the latest transformative research and applications of AI in medicine, including regulatory and ethical implications. The congress included 12 keynote lectures, two panel discussions, several parallel symposia and numerous abstract poster presentations. Here, we summarise the key points from the iAIM keynote presentations.

Figure 1: International Conference on AI in Medicine.

IMPENDING CHANGE OF CULTURE IN HEALTH CARE

The conference opened with Professor Chin Jing Jih (Chairman Medical Board, Tan Tock Seng Hospital [TTSH], Singapore) addressing ‘Medical practice in the era of AI: a rapidly changing landscape’. In his talk, Prof Chin described AI as a ‘marathon without a finishing line’ and highlighted its already transformative impact in radiology, emergency and cardiovascular medicine, sepsis and diabetes mellitus in hospital care. He also emphasised the importance of developing intelligent healthcare systems, including the need for an ‘AI culture’ within the medical workforce, arguing that a fundamental change in workflow and professional culture, more than technological advancement, is required for successful implementation. This was followed by a fireside chat with Dr Andrew Ng (Founder of DeepLearning.AI).
Dr Ng depicted AI as a general-purpose technology of the near future, with omnipresence and omnipotence much like electricity, and suggested that the timeline to its full adoption needs to be carefully considered, balancing rapid developments with the need for ‘thoughtful, responsible and transparent’ applications. He also outlined the required alignment of values among academic development, scientific discovery and patient care, and emphasised that while randomised controlled trials (RCTs) remain the gold-standard evidence for medical interventions, it may not be practicable to insist on RCTs for all AI technologies: AI systems are increasingly complex, and change is so rapid that an RCT may not be completed before another advancement emerges. The ‘ethics of ChatGPT’ was addressed by Professor Julian Savulescu (Director, Centre for Biomedical Ethics, National University of Singapore [NUS], Singapore), who impressed upon the audience the importance of justifying credit for the use of AI in academic publications and the importance of transparency. He also emphasised that while the model of the doctor–patient relationship may evolve with the use of AI, the ethics of delegating medical decisions to a large language model must be tackled.

TRANSFORMATIONS FROM DRUG DISCOVERY, CLINICAL SERVICE TO MEDICAL EDUCATION

Dr Le Song (Chief Technology Officer and Chief AI Scientist, BioMAP, USA) spoke on ‘Pretrained AI models for target discovery and drug design’, outlining the rapid increase in model sizes, data volume, complexity and diversity, as well as how AI is demonstrating key capabilities in identifying drug targets and protein and antibody structures, with efficiency in exploring complex biological systems in high-dimensional space.
Professor Kendall Ho (Medical Director, Healthlink-BC emergency iDoctors in Assistance [HEiDi], University of British Columbia, Canada) pointed out the importance of the ‘Digitalisation of Emergency Medicine’ in reducing health inequity and empowering patients who require emergency hospital services. Citing HEiDi as an exemplar, he outlined the immense success the programme has brought to the Canadian health system, particularly through optimised telehealth, hospital-at-home programmes and contactless sensing of vital signs, with the system shown to reduce emergency service requirements by over 60% without jeopardising patient outcomes. Professor Wong Tien Yin (Founding Head of Medicine, Tsinghua University, China) spoke on ‘Transforming medical education and physician training in the era of AI’. He spoke about the need to transform current curricula to be ‘fit for purpose’, potentially including AI as a new academic subject, as future doctors will need to know how to use, interpret and potentially explain AI. With its potential to democratise health care and education, AI represents an important global opportunity. A talk on ‘State of the art in AI for gastroenterology/endoscopy’ was delivered by Professor Prateek Sharma (President-Elect, American Society for Gastrointestinal Endoscopy), who demonstrated the use of new technology in endoscopy while emphasising that digitalisation is key to all current applications of AI in gastroenterology practice. He further showed that AI can assist in locating lesions, perform optical biopsies for diagnosis, assess the risk of endoscopic treatment, provide quality assurance of endoscopy and generate reports for endoscopists, streamlining procedures for efficiency. A fresh concept of using a patient’s own data to optimise care through AI was the focus of Professor Dean Ho (Director, The Institute for Digital Medicine, NUS) in his keynote entitled ‘Medicine made for you’.
He provided case examples using digital avatars and digital therapeutics in the fields of transplant medicine and cancer. This new perspective bypasses the need for large data volumes and circumvents concerns about data privacy, with the patient’s own physiological data (e.g. blood biomarkers for cancer) guiding individual therapy and determining the best medicine at the optimal dosage. Professor Tao Da Cheng (University of Sydney, Australia) spoke on ‘Foundational models and opportunities in medical image analysis’, illustrating the practical example of his studies on human posture for development tracking, elderly care, sport optimisation and rehabilitation.

SETTING THE BOUNDARIES AND FRAMEWORK FOR AI TECHNOLOGIES IN HEALTHCARE SERVICES

In view of the disruptive impact of AI in medicine, issues of values, ethics and law need to be resolved. Collaboration among the three medical schools and three hospital clusters across Singapore has produced a series of position statements from the Singapore Working Group on the use of AI in medicine. These were presented by Professor Joseph Sung (Dean, LKCMedicine, NTU), who outlined the 14 statements in eight categories devised by the multidisciplinary group, covering explainability, feedback, provider and receiver autonomy, values, equity, affordability, accessibility, engagement, the doctor–patient relationship, trust, legality, evidence and governance. Professor Tan Chorh Chuan (Executive Director, Office for Healthcare Transformation, Ministry of Health, Singapore) further elaborated on the use of ‘AI in future healthcare systems’ by discussing the data revolution, digitalisation and the future healthcare system, both in Singapore and beyond the country’s borders. Prof Tan emphasised that facilitating AI applications in health systems will require firm foundations, interoperability, integration, scalability and trust, while ensuring data privacy and security.
From North America, Dr Aleksandra Mojsilovic (IBM Research, USA) emphasised the need for ‘Trustworthy, safe and beneficial models’. She pointed out that trust in foundation models may differ from trust in traditional data science: hallucinations, lack of factuality or faithfulness, lack of source attribution, inability to reason, privacy leakage and misinformation are just some of the new challenges. To build trust, different detectors, guardrails and mitigators will be required for system monitoring. Effective AI governance must remain multifaceted and involve multiple stakeholders. This includes: (a) establishing policy and regulation to define what AI should or should not do; (b) setting up industry standards that establish common definitions and guardrails; (c) defining frameworks and best practices to encode policies into business rules and guidelines; and (d) adding tools and technologies to support them.

ENSURING THE JOB SECURITY OF HEALTHCARE PROFESSIONALS

The iAIM included two panel discussions. The first panel addressed the question, ‘Will AI affect the lives and jobs of medical professionals?’ and was chaired by Professor May Lwin (Chair, Wee Kim Wee School of Communication and Information, NTU). The distinguished panel consisted of Dr Tan See Leng (Minister for Manpower and Second Minister for Trade and Industry, Singapore), Professor Kenneth Mak (Director-General of Health, Ministry of Health, Singapore), Professor Miao Chun Yun (Chair, School of Computer Science and Engineering, NTU) and Dr Zhou Lihan (Co-founder and Chief Executive Officer, MiRXES, Singapore). The panel discussed the increased opportunities AI offers the healthcare workforce, including its acceptance among healthcare professionals and the observation that technology is currently ahead of policy.
Industry perspectives were shared, including the view that the costs of generating data are increasingly lower than those of storing it. Prudence is warranted when assessing AI’s benefits; these technologies should be embraced but should never replace the ‘human’ aspect of medicine. Both Dr Tan See Leng and Prof Kenneth Mak reassured participants that healthcare providers’ roles within the system will not be replaced or displaced. The second panel, chaired by Dr Ng Yih Yng (Director, Digital and Smart Health Office, TTSH and Central Health Region, Singapore) and comprising Associate Professor Michelle Jong (Group Chief Education Officer, NHG), Professor Wong Tien Yin and Professor Simon Kitto (Visiting Professor, LKCMedicine, NTU), discussed the importance of ‘Medical education for future healthcare providers’, focusing on what medical education will need in relation to AI and separating the ‘must knows’ from the ‘good to knows’ when designing curricula for future healthcare professionals. To keep clinicians in the loop and ensure the centrality of the doctor–patient relationship, we need to start training doctors, allied health workers and even medical students now, before system change ensues. While it is hard not to catch AI ‘fever’, significant challenges, known and unknown, lie ahead. We hope that future iterations of the iAIM meetings will continue to serve as a key global platform to address new opportunities, emergent challenges and collaboration across institutions for better patient outcomes.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.