OpenAlex · Updated hourly · Last updated: 24.03.2026, 01:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial Intelligence in Health

2024 · 3 citations · NMO journal · Open Access
Open full text at the publisher

Citations: 3 · Authors: 2 · Year: 2024

Abstract

Artificial intelligence (AI) reshapes health care by enabling data-driven insights, diagnostic advancements and therapeutic innovation. AI’s potential in health care aligns with global objectives, such as the Sustainable Development Goals, to enhance healthcare access and quality worldwide.[1] Through the ability to simulate human-like reasoning, AI tools contribute to preventive and curative care, addressing challenges within healthcare systems and improving patient outcomes.

AI tools, particularly large language models (LLMs), are often used in health care to analyse massive datasets. However, these LLMs present substantial privacy risks, as they must access vast quantities of personal data to learn effectively. Managing these privacy concerns is essential, as LLMs may inadvertently expose sensitive patient information. Data security and patient confidentiality are critical in leveraging AI while maintaining trust between healthcare providers and patients.[2] LLMs and other AI models in healthcare applications must be designed with privacy safeguards to mitigate these risks, ensuring they comply with data protection standards and healthcare privacy regulations.

Ethical concerns about AI in health care extend beyond privacy issues. The complexity of using AI models to process patient data raises questions about bias, accountability and interpretability. LLMs, particularly those designed for healthcare applications, face scrutiny for their potential to reinforce social biases, such as those based on race or socioeconomic status.[3] For example, training AI on datasets that lack diversity can lead to biased outcomes, where algorithms may underrepresent or misinterpret the health needs of certain demographic groups. Addressing these issues requires a rigorous approach to AI model development and validation to ensure inclusivity and ethical alignment with healthcare standards.

Integrating AI tools in health care, especially large-scale models such as deep neural networks, also presents technical challenges. AI models can display unintended biases arising from linguistic, cultural or social differences. Cross-linguistic bias, for example, can impact the performance of language-based healthcare AI applications when used across regions with diverse dialects and language structures.[4] This challenge underscores the importance of customising AI applications to suit regional and cultural contexts, particularly in healthcare settings that rely on precise language for diagnosis, patient communication and treatment adherence.

In addition to cross-linguistic considerations, AI models must be evaluated for potential biases in gender- and occupation-related assumptions. For instance, a study found that AI systems could reinforce stereotypes when not properly monitored, such as associating specific occupations with certain genders.[5] In health care, such biases may lead to disparities in treatment recommendations or care quality, underscoring the need for stringent bias assessments during model training and validation. To develop unbiased and fair AI systems, healthcare data used in training must represent a broad demographic spectrum, avoiding skewed outcomes and supporting equitable healthcare practices.
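As a minimal sketch of what such a bias assessment could look like in practice (the records, field names and groups below are hypothetical illustrations, not data from this article), one common first step is to compare a model’s sensitivity across demographic groups on a held-out validation set; a large gap flags exactly the kind of disparity in care quality discussed above.

    from collections import defaultdict

    def sensitivity_by_group(records):
        # Sensitivity (true positive rate) computed separately per demographic
        # group, so gaps between groups become visible during validation.
        counts = defaultdict(lambda: {"tp": 0, "fn": 0})
        for r in records:
            if r["label"] == 1:  # only true positive cases enter sensitivity
                bucket = counts[r["group"]]
                bucket["tp" if r["prediction"] == 1 else "fn"] += 1
        return {
            group: c["tp"] / (c["tp"] + c["fn"])
            for group, c in counts.items()
        }

    # Hypothetical validation records for a screening model; "group" stands in
    # for a sensitive attribute such as a demographic category.
    validation = [
        {"group": "A", "label": 1, "prediction": 1},
        {"group": "A", "label": 1, "prediction": 1},
        {"group": "A", "label": 1, "prediction": 0},
        {"group": "B", "label": 1, "prediction": 1},
        {"group": "B", "label": 1, "prediction": 0},
        {"group": "B", "label": 1, "prediction": 0},
    ]

    rates = sensitivity_by_group(validation)
    print({g: round(v, 2) for g, v in rates.items()})  # {'A': 0.67, 'B': 0.33}
    print(f"largest gap: {max(rates.values()) - min(rates.values()):.2f}")

A real audit would stratify many more metrics (calibration, false positive rates, intersectional groups) over far larger samples, but the structure is the same: every evaluation metric is broken down by the attributes along which harm could occur.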
As AI applications become increasingly powerful, the ethical debate surrounding AI’s capacity for ‘intelligent’ actions has intensified. Critics argue that large AI models, sometimes labelled ‘stochastic parrots’, can amplify existing social and ethical biases due to their reliance on massive datasets.[6] These biases could have far-reaching effects if not carefully managed. For instance, biased AI recommendations could potentially lead to erroneous diagnoses or treatment plans in clinical settings. Thus, healthcare providers and regulators must prioritise fairness and accountability by monitoring AI models’ ethical implications and implementing strategies to mitigate unintended biases.

Despite these challenges, AI holds transformative potential in health care. AI has been widely applied in diagnostics and primary care, helping to enhance diagnosis accuracy, streamline workflows and improve patient monitoring. In primary care, AI aids in detecting conditions early, thereby enabling timely interventions that can significantly improve patient outcomes.[7] Moreover, AI’s role in primary care and family medicine settings highlights its capacity to support frontline healthcare providers by analysing data and offering decision-making support in real time. This assistance can alleviate some burdens on healthcare providers, allowing them to focus on more complex clinical interactions and patient care.

However, while AI shows promise, its impact in low-resource settings raises additional considerations. AI could help bridge healthcare gaps in regions with limited healthcare infrastructure by providing decision support and diagnostic tools. However, deploying AI in these settings requires careful planning to ensure it complements rather than replaces the human healthcare workforce.[8] For instance, the use of AI in diagnostics must be adapted to account for local healthcare needs, addressing specific public health challenges such as infectious disease outbreaks or chronic illness management in underserved communities. Ensuring AI’s alignment with local healthcare infrastructure is critical to sustainable adoption in resource-limited environments.[9]

A global perspective on AI in health care has highlighted the need for an ethical and regulatory framework to guide its application. The World Health Organization (WHO) has published a digital health strategy that underscores principles such as patient autonomy, transparency and accountability in AI applications.[10] According to the WHO, AI tools should augment rather than replace human healthcare providers. This approach aligns with the ethical standard of ensuring that technology improves healthcare delivery without compromising the personal, empathetic elements essential to patient care. The WHO guidelines are especially relevant as AI applications expand across medical specialties, from radiology to mental health.

While AI has made numerous contributions, its ethical and practical limitations remain barriers to widespread adoption. One of the primary issues is transparency, as healthcare providers and patients may be reluctant to rely on opaque AI systems whose decision-making processes are challenging to interpret.[11] The WHO emphasises the importance of transparency in AI development, urging policy-makers to ensure that AI systems used in health care are designed with explainable algorithms. This approach is essential in building trust in AI tools, especially in high-stakes environments such as clinical decision-making.
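To make the transparency point concrete, the sketch below shows one very simple form of an ‘explainable algorithm’: a linear risk score that reports each feature’s additive contribution alongside its prediction, so a clinician can inspect why a number was produced. The feature names and weights are invented for illustration; they are not a method described in this article or prescribed by the WHO.

    import math

    # Hypothetical, hand-set weights for a toy readmission-risk score; a real
    # system would use a validated and independently audited model.
    WEIGHTS = {"age_over_65": 0.8, "prior_admissions": 0.6, "hba1c_elevated": 0.4}
    BIAS = -1.5

    def explained_risk(features):
        # Return the risk estimate together with each feature's contribution,
        # so the output is a ranked explanation rather than a bare number.
        contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
        score = BIAS + sum(contributions.values())
        probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
        return probability, contributions

    risk, reasons = explained_risk({"age_over_65": 1, "prior_admissions": 2, "hba1c_elevated": 1})
    print(f"estimated risk: {risk:.2f}")  # estimated risk: 0.71
    for feature, contribution in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {contribution:+.2f}")

More capable models can be paired with post-hoc explanation techniques, but the governance idea is the same: every prediction that informs care should come with a human-readable account of what drove it.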
In conclusion, AI’s future in health care is promising, yet careful consideration of ethics, privacy and equity must guide its continued development. The WHO’s ethical guidelines underscore the importance of human-centred AI, where technology enhances rather than replaces the healthcare provider’s role. By addressing these ethical concerns, refining AI models to minimise bias and tailoring applications to specific healthcare settings, AI can fulfil its potential to transform health care while upholding the core principles of compassion and patient-centred care.

Related works