OpenAlex · Updated hourly · Last updated: 7 May 2026, 01:30

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The emerging use of artificial intelligence in safety pharmacology & toxicology

2026 · 0 citations · Journal of Pharmacological and Toxicological Methods · Open Access

Citations: 0 · Authors: 6 · Year: 2026

Abstract

Safety pharmacology is concerned with the identification and characterization of adverse effects of drug candidates on vital organ systems. The emergence of artificial intelligence (AI) and machine learning (ML) has prompted growing interest in their potential application to nonclinical drug safety evaluation across the core battery of cardiovascular, central nervous, and respiratory systems. This review traces the historical development and technical foundations of AI, from early neural network research and backpropagation algorithms to modern frontier large language models, and examines how these technologies are being applied to safety pharmacology study endpoints, including proarrhythmic risk assessment consistent with the International Council for Harmonisation (ICH) S7B and Comprehensive in vitro Proarrhythmia Assay (CiPA) frameworks, seizure liability detection via microelectrode array analysis, and respiratory function monitoring through whole-body plethysmography. Applications of AI in broader toxicological assessment, including multi-endpoint toxicity prediction, digital pathology, and federated learning consortia, are also reviewed. To gauge current adoption and attitudes within the discipline, a survey of Safety Pharmacology Society (SPS) members was conducted at the 2024 annual meeting (N = 89). The survey revealed that 57% of respondents were not currently using AI tools, although 44% of non-users planned adoption within the following year; 84% of respondents intended to apply AI in preclinical safety development. The evolving regulatory landscape, including the 2025 United States Food and Drug Administration (FDA) draft guidance on AI credibility and the 2026 FDA/European Medicines Agency (EMA) joint guiding principles, is discussed alongside challenges related to data quality, model interpretability, and validation requirements.
The findings indicate that while AI tools show promise for specific applications such as structure-based toxicity prediction and automated signal analysis, the safety pharmacology community appropriately demands rigorous validation before integration into regulated workflows. The challenges of model interpretability, data quality, and the absence of prospective validation studies represent substantive barriers that must be addressed through collaborative effort among industry, academia, regulatory agencies, and scientific societies. AI in safety pharmacology is currently best positioned as a complementary analytical tool that may help avoid investing resources in compounds with predictable safety liabilities, rather than as a replacement for expert scientific judgment.
