This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Epistemic compression in large language model explanations of the gut–liver axis
Citations: 0
Authors: 5
Year: 2026
Abstract
LLM explanations of the gut–liver axis are susceptible to epistemic compression driven by narrative fluency rather than factual error. Readability does not reliably indicate epistemic robustness in decision-adjacent contexts. These findings support shifting evaluation and governance from platform comparison toward concept-conditioned requirement engineering that enforces provenance, calibrated uncertainty, and explicit separation of correlation, mechanism, and actionability as generative outputs approach clinical relevance.
Similar works
DADA2: High-resolution sample inference from Illumina amplicon data
2016 · 34,485 citations
Reproducible, interactive, scalable and extensible microbiome data science using QIIME 2
2019 · 22,762 citations
Introducing mothur: Open-Source, Platform-Independent, Community-Supported Software for Describing and Comparing Microbial Communities
2009 · 21,474 citations
Naive Bayesian Classifier for Rapid Assignment of rRNA Sequences into the New Bacterial Taxonomy
2007 · 20,228 citations
UPARSE: highly accurate OTU sequences from microbial amplicon reads
2013 · 16,925 citations