This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Future directions in infertility research: the role of generative AI and large language models
Citations: 0
Authors: 1
Year: 2026
Abstract
Male infertility remains an understudied yet significant contributor to global reproductive health challenges, with up to 50% of infertility cases involving a male factor and a large proportion still classified as idiopathic. Recent advances in generative artificial intelligence (AI), particularly large language models (LLMs), offer a transformative opportunity to tackle persistent gaps in understanding the genetic, epigenetic, and environmental determinants of male infertility. This chapter explores the scientific potential of LLMs and related AI technologies in accelerating discoveries across the male reproductive research continuum, from interpreting complex genomic data and identifying novel gene-environment interactions to enhancing sperm quality assessment and predicting an unborn child's long-term health risks stemming from paternal factors. Real-world examples and emerging case studies illustrate how generative AI can help fertility researchers learn rapidly, synthesize massive volumes of literature, generate hypotheses, design experiments, and reveal patterns that conventional analyses may miss. The narrative further reflects on the implications of using AI to forecast offspring health via polygenic risk scoring and in silico developmental simulations, highlighting both technical promise and ethical considerations. Written from the perspective of a computational scientist collaborating with fertility experts, this chapter demonstrates how interdisciplinary approaches, amplified by LLMs, can lower barriers between computer science and reproductive biology. By embracing generative AI responsibly, with attention to data quality, interpretability, and social responsibility, male infertility researchers stand poised to unlock novel insights that will benefit not only current patients but also future generations.
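The abstract refers to forecasting offspring health via polygenic risk scoring. As a minimal, hypothetical sketch (not taken from the chapter, with made-up effect sizes and dosages), a polygenic risk score is commonly computed as a weighted sum of allele dosages across variants:

```python
def polygenic_risk_score(dosages, weights):
    """Compute a polygenic risk score.

    dosages: per-variant allele counts (0, 1, or 2 copies of the risk allele)
    weights: per-variant effect sizes, e.g. GWAS log-odds ratios
    """
    if len(dosages) != len(weights):
        raise ValueError("dosages and weights must align per variant")
    # PRS = sum over variants of (effect size * allele dosage)
    return sum(d * w for d, w in zip(dosages, weights))

# Hypothetical example with three variants:
dosages = [2, 0, 1]
weights = [0.12, -0.05, 0.30]
score = polygenic_risk_score(dosages, weights)  # 2*0.12 + 0*(-0.05) + 1*0.30 = 0.54
```

In practice such scores are built from thousands to millions of variants with weights estimated from large genome-wide association studies; this toy version only illustrates the arithmetic.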
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations