OpenAlex · Updated hourly · Last updated: 23.03.2026, 01:13

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Frameworks for using Large Language Models in Toxicological Risk Assessment

2025 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

Large Language Models (LLMs) (e.g., ChatGPT, Google Gemini) have the potential to increase the speed, comprehensiveness, and overall quality of toxicological risk assessments. These models can efficiently extract and collate data from documents, as well as draft reports and summarize studies. However, a number of technical limitations of such models have been reported (e.g., inconsistent outputs, prompt sensitivity, hallucinations, and lack of transparency) that need to be overcome before LLMs can be used effectively in this context. To address these challenges, a framework has been developed outlining the use of LLMs to support toxicological assessment associated with exposure to chemicals or with perturbation of biological targets (such as receptors). This framework addresses the limitations of LLMs and is based on input from an industry consortium as well as case studies. An example illustrating the use of the framework to assess target biology will be presented, covering the major steps in the process from data capture to evidence interpretation.
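One of the limitations the abstract names, inconsistent outputs, is commonly mitigated by running the same extraction prompt several times and accepting an answer only when a majority of runs agree. A minimal sketch of that idea follows; the `call_llm` function is a hypothetical stand-in (the abstract does not specify the framework's implementation), and the simulated responses are illustrative only:

```python
from collections import Counter

def call_llm(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for an LLM API call; a real pipeline would
    query a model such as ChatGPT or Gemini here."""
    # Simulated non-deterministic answers to a data-extraction prompt.
    responses = ["100 mg/kg", "100 mg/kg", "10 mg/kg"]
    return responses[seed % len(responses)]

def consistent_extract(prompt: str, n_runs: int = 5, min_agreement: float = 0.6):
    """Run the same extraction prompt n_runs times and accept the
    majority answer only if agreement meets the threshold; otherwise
    flag the item for human review."""
    answers = [call_llm(prompt, seed=i) for i in range(n_runs)]
    value, count = Counter(answers).most_common(1)[0]
    if count / n_runs >= min_agreement:
        return value, "accepted"
    return value, "needs human review"

value, status = consistent_extract("Extract the NOAEL from the attached study.")
print(value, status)  # -> 100 mg/kg accepted
```

Thresholds and run counts are assumptions for illustration; a production assessment workflow would also log all raw responses so the agreement check itself stays transparent and auditable.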


Topics

Effects and Risks of Endocrine Disrupting Chemicals · Computational Drug Discovery Methods · Artificial Intelligence in Healthcare and Education