This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The future of fundamental science led by generative closed-loop artificial intelligence
Citations: 0 · Authors: 20 · Year: 2026
Abstract
Artificial intelligence is approaching the point at which it can complete the scientific cycle, from hypothesis generation to experimental design and validation, within a closed loop that requires little human intervention. Yet the loop is not fully autonomous: humans still curate data, set hyperparameters, adjudicate interpretability, and decide what counts as a satisfactory explanation. As models scale, they begin to explore regions of hypothesis and solution space that are inaccessible to human reasoning because they are too intricate or alien to our intuitions. Scientists may soon rely on AI strategies they do not fully understand, trusting goals and empirical payoffs rather than derivations. This prospect forces a choice about how much control to relinquish to accelerate discovery while keeping outputs human-relevant. The answer cannot be a blanket policy to deploy LLMs or any single paradigm everywhere. It demands principled matching of methods to domains, hybrid causal and neurosymbolic scaffolds around generative models, and governance that preserves plurality and counters recursive bias. Otherwise, recursive training and uncritical reuse risk model collapse in AI and an epistemic collapse in science, as statistical inertia amplifies flaws and narrows investigation. We argue for graded autonomy in AI-conducted science: systems that can close the loop at machine speed, while remaining anchored to human priorities, verifiable mechanisms, and domain-appropriate forms of understanding.
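The graded autonomy the abstract argues for can be pictured as a control loop with an interpretability gate: the system proposes and tests hypotheses at machine speed, accepts results it can account for, and escalates opaque ones to a human reviewer. The following is a minimal Python sketch of that idea; the function names, the payoff/interpretability scores, and the 0.7 threshold are all illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical sketch of a graded-autonomy closed loop: hypothesis ->
# experiment -> validation runs autonomously, but any result below an
# interpretability threshold is escalated to a human checkpoint.
INTERPRETABILITY_THRESHOLD = 0.7  # assumed cutoff for autonomous acceptance

def generate_hypothesis(history):
    """Stand-in for a generative model proposing the next hypothesis."""
    return {"id": len(history), "params": random.random()}

def run_experiment(hypothesis):
    """Stand-in for automated experimental design and execution."""
    payoff = 1.0 - abs(hypothesis["params"] - 0.5)   # toy empirical payoff
    interpretability = random.random()               # proxy for mechanistic clarity
    return {"payoff": payoff, "interpretability": interpretability}

def human_review(hypothesis, result):
    """Checkpoint: a human adjudicates low-interpretability results."""
    print(f"Escalated hypothesis {hypothesis['id']} for review "
          f"(interpretability={result['interpretability']:.2f})")
    return result["payoff"] > 0.8  # human weighs payoff against opacity

def closed_loop(n_iterations=10):
    history, accepted = [], []
    for _ in range(n_iterations):
        hypothesis = generate_hypothesis(history)
        result = run_experiment(hypothesis)
        if result["interpretability"] >= INTERPRETABILITY_THRESHOLD:
            accepted.append(hypothesis)              # autonomous acceptance
        elif human_review(hypothesis, result):
            accepted.append(hypothesis)              # human-approved exception
        history.append((hypothesis, result))
    return accepted

if __name__ == "__main__":
    print(f"Accepted {len(closed_loop())} hypotheses")
```

Raising the threshold toward 1.0 recovers fully human-gated science; lowering it toward 0.0 yields the fully autonomous loop the abstract cautions against.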
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- Turing Institute (GB)
- King's College - North Carolina (US)
- King's College London (GB)
- University of Cambridge (GB)
- British Library (GB)
- The Francis Crick Institute (GB)
- The Alan Turing Institute (GB)
- Centre for Sustainable Healthcare (GB)
- Karolinska Institutet (SE)
- Living Systems (US)
- Universidade Estadual de Campinas (UNICAMP) (BR)
- National Energy Research Scientific Computing Center (US)
- Simulation Technologies (US)
- University of Minnesota (US)
- University of Southampton (GB)
- Goldsmiths University of London (GB)
- University of Edinburgh (GB)
- Loughborough University (GB)
- Keio University (JP)
- RIKEN Center for Biosystems Dynamics Research (JP)
- Eiken Chemical (JP)
- Jožef Stefan Institute (SI)
- University of Oxford (GB)
- United States Army Combat Capabilities Development Command (US)
- Cornell University (US)
- University of Birmingham (GB)
- University of Chicago (US)
- Okinawa Institute of Science and Technology Graduate University (JP)
- Chalmers University of Technology (SE)