OpenAlex · Updated hourly · Last updated: 02.04.2026, 19:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Computational medicine: Grand challenges and opportunities for revolutionizing personalized healthcare

2023 · 3 citations · 1 author · Frontiers in Medical Engineering · Open Access
Open full text at the publisher

Abstract

Medical advances in treatment strategies against age-related global killers such as cardiovascular disease, cancer, and stroke have been responsible for the significant gains in average global life expectancy observed since the second half of the 20th century (1). Still, medical research has been less successful at prolonging healthy life. Globally, one in three adults lives with multiple chronic conditions (2). In the US, 80% of the population older than 65 lives with at least one chronic condition, and 50% lives with two (1). As the aging population grows rapidly, the incidence of age-related, costly, chronic conditions such as heart disease, cancer, diabetes, and Alzheimer's is reaching epidemic proportions. In the US alone, healthcare spending already accounts for a larger share of the gross domestic product than any other sector, including defense, education, energy, and transportation (3). With the annual total costs of age-related diseases expected to skyrocket, all nations urgently need to reduce the economic burden of population aging. Prolonging lifespan without prolonging health span is financially unsustainable for all nations.

Computational medicine emerged in the past decade as an interdisciplinary field dedicated to integrating advanced computational modeling, data-driven technologies, and supercomputing to derive new knowledge about the biological mechanisms of disease and a deeper understanding of the factors driving inter-patient variability (4). Such knowledge enables the development of precision strategies to diagnose and treat disease, sustain wellbeing, and optimize the utilization of healthcare resources (5). Computational medicine has the potential to drive transformative advances in healthcare, extend health span, and rein in healthcare costs by (i) enabling a more holistic understanding of the broad spectrum of factors, processes, and their interplay impacting wellbeing at the individual and population levels, and (ii) translating such understanding into dynamically adaptive, personalized medical decisions that drive effective and sustainable health management practices. Several efforts already demonstrate the potential of computational medicine across various diseases and conditions (e.g., [6][7][8][9][10][11][12]).

To achieve its full potential, computational medicine should be able to build a digital twin of the human by mapping the human genome (i.e., genomic profile), phenome (i.e., physiologic status), and exposome (i.e., physical and social environment) in real time and across the human lifetime. Understanding the human genome-phenome-exposome interplay is an ambitious endeavor which demands a multidisciplinary team of biologists, physicists, chemists, engineers, mathematicians, computer scientists, and data scientists. Sitting at the intersection of these scientific domains, computational medicine faces grand challenges in three key areas.

Multiscale computational modeling of complex biological systems has been an active field of research, leading to notable advances in both mechanistic and agent-based disease models. The overarching goal is the quantitative representation of interconnected biological processes that cannot be easily delineated experimentally. Such models should be able to capture the spatiotemporal interdependencies across all scales, from the genomic, transcriptomic, proteomic, and metabolomic scales to the tissue and organ levels, and ultimately to the individual and population levels (13,14).
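To make the structure of such multiscale models concrete, the following toy sketch (not from the article) couples a fast cell-scale variable to a slow tissue-scale variable as a pair of ordinary differential equations; the equations, parameters, and variable names are illustrative assumptions, not a model from the literature.

```python
# Toy illustration of multiscale coupling: a fast cell-scale variable
# (protein level) drives a slow tissue-scale variable (stiffness), and
# the tissue state feeds back on the cell. All equations and parameters
# are hypothetical, chosen only to show the structure of such models.
from scipy.integrate import solve_ivp

def multiscale_rhs(t, y, k_expr=5.0, k_deg=1.0, k_remod=0.05):
    protein, stiffness = y
    # Fast scale: expression saturates with tissue stiffness
    # (mechanotransduction-style feedback), with first-order degradation.
    dprotein = k_expr * stiffness / (1.0 + stiffness) - k_deg * protein
    # Slow scale: tissue remodels toward a level set by protein abundance,
    # on a timescale roughly 20x slower than the cell-scale dynamics.
    dstiffness = k_remod * (protein - stiffness)
    return [dprotein, dstiffness]

sol = solve_ivp(multiscale_rhs, (0.0, 200.0), [0.1, 1.0])
print("steady state (protein, stiffness):", sol.y[:, -1])
```

Real multiscale models replace each right-hand side with detailed mechanistic or agent-based submodels, but the coupling pattern, and the stiffness that comes from mixing fast and slow dynamics, is the same.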
Recently, the interleaving of traditional computational modeling and simulation with artificial intelligence approaches has emerged as a strong theme in complex systems modeling (15,16), with several applications in drug design, structural biology, and neuroscience, to name a few (e.g., [17][18][19][20][21][22]). Since system complexity increases when bridging different scales, technical challenges arise as the number of system parameters rapidly grows. Furthermore, the integration of computational representations across different organ systems remains an outstanding challenge (23). Developing efficient computational tools that can manage multi-modal, multiscale data and make effective use of high-performance computing resources is critical for overcoming current barriers.

One outstanding and ever-increasing challenge with complex multiscale computational models and the latest artificial intelligence models, known as language models or transformers (24,25), is the growing demand for computational power (18). Model training, hyper-parameter optimization, uncertainty quantification, and validation become exponentially more demanding as efforts move from single to multiple scales (26). As energy efficiency becomes a bottleneck for large-scale computational science, efficient algorithmic development will be necessary for scalable computational medicine. It has been proposed that a modular approach to multiscale computational medicine, with interoperable and reusable computational tools mimicking the first-principles modeling approaches of computational chemistry and physics, may be appropriate, as such an approach has been very successful in materials science (27). Although biological systems differ from physical systems, investing in such an effort holds potential for deriving important building blocks that bridge a few spatiotemporal scales. Still, standards for reproducible research in computational medicine are lacking. Creating and maintaining data and model repositories is critical for ensuring reproducibility and reliability. As many of these endeavors become computationally very intensive, the burden of reproducibility is immense for the average researcher. We need to invest substantial resources in compute-and-data infrastructures to ensure the scientific integrity, reproducibility, and reliability of data and models.

The breadth of technical and algorithmic challenges exemplifies the need for strong collaborations across disparate scientific domains, across different and even competing approaches, and across different stakeholders. For example, in 2016, the National Cancer Institute (NCI) and the Department of Energy (DOE) in the US partnered in a collaboration to accelerate advances in predictive oncology. The collaboration brought together multidisciplinary experts in the biological, computational, data, and physical sciences to develop, demonstrate, and disseminate advanced computational capabilities that help answer driving scientific questions across molecular, cellular, and population scales (28). The community effort is growing by adding new scientific challenges that build upon strong collaborations (29).

The concept of the digital twin has gained a lot of traction within computational medicine (30,31). The digital twin is a virtual representation of a patient as a multi-modal system which incorporates patient data to inform personalized medical decisions related to disease prediction, diagnosis, therapeutic interventions, and prognosis.

A digital twin can be created at different levels of detail (e.g., organ, individual, population, healthcare system) using various data sources as they become available. By merging non-traditional data sources (e.g., environmental factors, socioeconomic conditions, lifestyle choices) with multiscale patient data, digital twins offer in silico modeling of patient health trajectories, taking into account the complex interplay of all factors and processes that affect wellbeing. Such models can be regularly interrogated to explore different scenarios (e.g., different treatments or lifestyle choices) to predict future risks and outcomes and empower individuals to make decisions at critical times and from the earliest stages of life. Furthermore, such longitudinal models can be dynamically adaptive, updated as new data become available. The potential to execute "what-if" scenarios entirely in silico can be very empowering for patients, physicians, researchers, and healthcare systems as each tries to optimize outcomes based on individual criteria and incentives. References (32,33) provide an insightful discussion of the potential of digital twins in the future of medicine, while disease-specific examples are emerging (34,35). The clinical implementation of the digital twin in computational medicine is still in its very early stages, facing challenges similar to those outlined in the previous section, namely data quality, data integration, reproducibility, reliability, as well as continuous quality assurance due to its dynamic nature.
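As a minimal, hypothetical sketch of the assimilate-and-interrogate loop described above, the following Python class folds new observations into a patient state and evaluates "what-if" scenarios against a deliberately trivial risk score; the DigitalTwin class, its fields, and the risk formula are invented for illustration only.

```python
# Minimal sketch of the digital-twin loop: assimilate new patient data as
# it arrives, then interrogate the twin with "what-if" scenarios. The
# class, field names, and risk score are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    state: dict = field(default_factory=dict)

    def assimilate(self, observation: dict) -> None:
        # Dynamic updating: fold newly observed measurements into the state.
        self.state.update(observation)

    def predict_risk(self, intervention: dict | None = None) -> float:
        # Stand-in risk model; a real twin would call mechanistic or
        # learned submodels here. Coefficients are arbitrary.
        s = {**self.state, **(intervention or {})}
        risk = 0.02 * s.get("age", 50) + 0.5 * s.get("ldl", 3.0)
        risk -= 0.8 if s.get("statin", False) else 0.0
        return max(risk, 0.0)

    def what_if(self, scenarios: dict[str, dict]) -> dict[str, float]:
        # Interrogate the twin under alternative treatment choices
        # without mutating the underlying patient state.
        return {name: self.predict_risk(s) for name, s in scenarios.items()}

twin = DigitalTwin()
twin.assimilate({"age": 61, "ldl": 4.2})   # a new lab result arrives
print(twin.what_if({"no change": {}, "start statin": {"statin": True}}))
```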
With computational medicine's great promise comes an even greater responsibility. We must recognize the pitfalls and possible ill-intended uses of computational models. Since these models rely heavily on patient data, there are many legal and ethical considerations related to data collection, sharing, and use. Access to large amounts of patient data is fundamental to understanding individual and population-level health outcomes over time. However, liberating and providing access to patient data is both a technological and a policy challenge. Furthermore, to create a richer picture, medical data must be combined with other data points that provide context on a patient's living conditions, which has substantial implications for predicting patient health trajectories. Although we all recognize the scientific value of human data, the debate over data ownership is ongoing in terms of how best to balance the promise of transparent innovation with the risks of unethical data handling, intentional or unintentional privacy breaches, and adversarial data use by hostile or malicious actors (36). To maintain a strong ethical framework for computational medicine, we need to answer a fundamental question: Who owns the intellectual property of data-driven computational models in healthcare? The patient? The medical center collecting the data while providing healthcare services? Or the model developer? Clearly, no single entity alone could deliver the breakthrough technology.

Next is the topic of data trustworthiness. There is plenty of evidence that low data quality and problematic data representativeness can compromise model validity (5) by creating new biases or exacerbating existing racial or societal biases in healthcare systems (37). During the development phase, scientists should apply a rigorous statistical framework to monitor for potential biases in the collected data. During the deployment phase, model developers should implement rigorous quality control, monitoring model performance across subgroups to confirm robust performance or identify performance gaps. We should communicate openly and clearly to patients and healthcare providers what they should expect from the technology so that they are informed consumers of it.
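A minimal sketch of such deployment-phase subgroup monitoring, assuming scored predictions in a pandas DataFrame with hypothetical column names and an arbitrary gap threshold, might look like this:

```python
# Sketch of deployment-phase monitoring across subgroups: compute a
# performance metric per subgroup and flag groups that trail the overall
# score. Column names and the gap threshold are assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_subgroups(df: pd.DataFrame, group_col: str,
                    y_true: str = "label", y_score: str = "score",
                    max_gap: float = 0.05) -> pd.DataFrame:
    """Per-subgroup AUC plus a flag for gaps against the overall AUC.

    Assumes every subgroup contains both outcome classes; production code
    would guard against degenerate groups and small sample sizes.
    """
    overall = roc_auc_score(df[y_true], df[y_score])
    rows = []
    for group, sub in df.groupby(group_col):
        auc = roc_auc_score(sub[y_true], sub[y_score])
        rows.append({"group": group, "n": len(sub), "auc": auc,
                     "flagged": overall - auc > max_gap})
    return pd.DataFrame(rows)

# Hypothetical usage on scored predictions with a demographic attribute:
# audit = audit_subgroups(predictions, group_col="ethnicity")
# print(audit[audit.flagged])
```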
Beyond data quality, data-driven models benefit from access to large volumes of representative medical data. With the explosive growth of large-scale deep learning models (38), the need for data sharing is pressing. State-of-the-art deep learning models have billions of parameters. Although these models can push boundaries in learning and generalizability (39,40), they require massive amounts of training data due to their large parameter space. Federated learning has emerged as a successful collaboration mechanism to address the privacy constraints of sensitive data sharing. Instead of sharing data, the collaborating entities share the model parameters after local training and fine-tuning (41). Several studies have demonstrated that models trained using federated learning are as accurate as those trained using centrally hosted datasets, and far more accurate than models trained on single-institution data. Nevertheless, federated learning is in its early stages of technical development. There are still outstanding concerns about reverse engineering of the trained "super-model" and about the legal implications if the model is broadly shared and used without proper authorization in clinical care (42,43).
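The parameter-sharing scheme described above can be sketched as federated averaging: each site trains locally, and the server aggregates the returned parameters weighted by local sample counts. The local_train step below is a toy least-squares update standing in for any site-side training procedure; the whole example is illustrative, not the article's method.

```python
# Sketch of one federated-averaging setup: raw patient records stay at
# each institution, only locally trained parameters are shared, and the
# server computes a sample-count-weighted average.
import numpy as np

def federated_round(global_params: np.ndarray, sites: list) -> np.ndarray:
    updates, weights = [], []
    for site in sites:
        X, y = site["data"]
        # Local training on private data; only parameters leave the site.
        updates.append(site["local_train"](global_params.copy(), (X, y)))
        weights.append(len(y))  # weight sites by local sample count
    return np.average(np.stack(updates), axis=0, weights=weights)

def local_train(params, data, lr=0.1, steps=10):
    """Toy site-side step: gradient descent on a linear least-squares loss."""
    X, y = data
    for _ in range(steps):
        params = params - lr * X.T @ (X @ params - y) / len(y)
    return params

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
sites = []
for _ in range(3):  # three hypothetical institutions with private data
    X = rng.normal(size=(40, 3))
    sites.append({"data": (X, X @ true_w + 0.1 * rng.normal(size=40)),
                  "local_train": local_train})

params = np.zeros(3)
for _ in range(5):  # five communication rounds
    params = federated_round(params, sites)
print("federated estimate:", params)  # approaches true_w
```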
Another grand challenge is that the clinical translation of computational models is not straightforward. If computational models are used for hypothesis generation or knowledge discovery, they are easier to embrace, as they serve as scientific instruments. However, if computational models are used to perform or assist clinical tasks, acceptance expectations are much higher. Often these models must be regulated, and they need to clearly demonstrate efficacy (i.e., performance equivalent to medical experts) and safety. Previous experience with AI for clinical decision support demonstrated that AI is capable of performing narrowly defined, repetitive tasks exceptionally well. Still, if physicians over-rely on such decision support technologies, they may lose critical skills needed for performing more difficult tasks (44). Another challenge with clinical integration is ensuring that the computational model is capable of assessing its confidence (i.e., uncertainty quantification) and providing a justification (i.e., explainability) for its predictions (45,46). Since one of the key drivers of the digital twin is to empower patients as they try to manage their disease and gain a better understanding of the short-term and long-term implications of the decisions they need to make, how to convey the model's reasoning and prediction confidence to the patient versus the healthcare provider is an understudied topic.

Ultimately, humans and computational models will have to work well together. But this synergy won't happen organically, as past health AI experiences have demonstrated. It is important to train both healthcare providers and patients in how to use computational models responsibly and how to remain vigilant against over-reliance on them. Objective benchmarking of datasets and models against community consensus metrics to detect, monitor, and possibly correct dataset biases or inconsistent model performance must become part of the practice of computational medicine.

The convergence of personalized digital health technologies, computing power, and artificial intelligence has ushered in a new era of healthcare delivery. Computational medicine holds immense promise for personalized disease management, from diagnosis to treatment to prognosis. Furthermore, computational medicine has the potential to offer much-needed relief from healthcare costs by enabling a deeper understanding of the interplay between biological and socioeconomic drivers, promoting personalized proactive healthcare approaches, delaying the onset of chronic diseases, and prolonging wellness. There are already numerous successful examples of computational medicine from bench to bedside; however, several challenges remain before the potential of computational modeling and personalized data is fully realized in clinical practice. The complexities of multiscale system modeling, the integration and analysis of multimodal personalized data, longitudinal modeling and dynamic system optimization to maximize personal and population-level outcomes, as well as practical issues of clinical integration and safe utilization at scale are grand challenges. To support the realization of computational precision medicine, we need the engagement and collaboration of many scientific domains, given the truly interdisciplinary nature of this endeavor.

Topics

Machine Learning in Healthcare · Health, Environment, Cognitive Aging · Artificial Intelligence in Healthcare and Education