This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Multi-hop Evidence Pursuit Meets the Web: Team Papelo at FEVER 2024
Citations: 2
Authors: 1
Year: 2024
Abstract
Separating disinformation from fact on the web has long challenged both the search and the reasoning powers of humans. We show that the reasoning power of large language models (LLMs) and the retrieval power of modern search engines can be combined to automate this process and explainably verify claims. We integrate LLMs and search under a multi-hop evidence pursuit strategy. This strategy generates an initial question based on an input claim using a sequence-to-sequence model, searches and formulates an answer to the question, and iteratively generates follow-up questions to pursue the evidence that is missing using an LLM. We demonstrate our system on the FEVER 2024 (AVeriTeC) shared task. Compared to a strategy of generating all the questions at once, our method obtains .045 higher label accuracy and .155 higher AVeriTeC score (evaluating the adequacy of the evidence). Through ablations, we show the importance of various design choices, such as the question generation method, medium-sized context, reasoning with one document at a time, adding metadata, paraphrasing, reducing the problem to two classes, and reconsidering the final verdict. Our submitted system achieves .510 AVeriTeC score on the dev set and .477 AVeriTeC score on the test set.
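The iterative loop described in the abstract — generate an initial question from the claim, search and answer it, then ask follow-up questions until no evidence is missing — can be sketched as below. This is a minimal illustration with stub functions standing in for the seq2seq model, the search engine, and the LLM; all function names (`generate_initial_question`, `search_and_answer`, `generate_followup`, `classify_verdict`) and the stopping heuristic are hypothetical, not the authors' actual implementation.

```python
def generate_initial_question(claim):
    # Stub: in the described system, a sequence-to-sequence model
    # turns the input claim into a first question.
    return f"Is it true that {claim}?"

def search_and_answer(question):
    # Stub: a search engine retrieves documents and an LLM
    # formulates an answer from them.
    return f"(answer retrieved for: {question})"

def generate_followup(claim, qa_pairs):
    # Stub: an LLM asks for evidence still missing; returns None
    # once it considers the evidence sufficient (here: 3 hops).
    if len(qa_pairs) >= 3:
        return None
    return f"Follow-up question {len(qa_pairs)} about: {claim}"

def classify_verdict(claim, qa_pairs):
    # Stub: the final label would come from an LLM reasoning
    # over the collected question-answer evidence.
    return "Supported"

def pursue_evidence(claim, max_hops=5):
    """Multi-hop evidence pursuit: question, search, answer, repeat."""
    qa_pairs = []
    question = generate_initial_question(claim)
    for _ in range(max_hops):
        qa_pairs.append((question, search_and_answer(question)))
        question = generate_followup(claim, qa_pairs)
        if question is None:  # no evidence reported missing
            break
    return classify_verdict(claim, qa_pairs), qa_pairs

verdict, evidence = pursue_evidence("the Eiffel Tower is in Paris")
```

The key design choice this sketch mirrors is that follow-up questions are conditioned on the evidence gathered so far, rather than being generated all at once up front — the comparison the abstract reports as worth .045 label accuracy and .155 AVeriTeC score.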
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,551 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,942 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations