arXiv:2403.18802 [cs.CL]

Computer Science > Computation and Language

[Submitted on 27 Mar 2024]

Title: Long-form factuality in large language models

Authors: Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, Quoc V. Le

Abstract: Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality. To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user's preferred response length (recall). Empirically, we demonstrate that LLM agents can achieve superhuman rating performance: on a set of ~16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at this https URL.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2403.18802 [cs.CL] (or arXiv:2403.18802v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2403.18802

Submission history:
From: Jerry Wei
[v1] Wed, 27 Mar 2024 17:48:55 UTC (236 KB)
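The SAFE procedure described in the abstract (decompose a long-form response into individual facts, then verify each fact against Google Search results via multi-step reasoning) can be sketched as follows. This is a minimal illustration, not the authors' released implementation: `call_llm` and `google_search` are hypothetical stand-ins for an LLM client and a search client, the prompts are placeholders, and details such as relevance filtering are omitted.

```python
# Minimal sketch of a SAFE-style evaluation loop (not the released code).
# `call_llm` and `google_search` are hypothetical callables supplied by the
# user: call_llm(prompt) -> str, google_search(query) -> list of result snippets.
from dataclasses import dataclass

@dataclass
class FactVerdict:
    fact: str
    supported: bool

def split_into_facts(response: str, call_llm) -> list[str]:
    # Step 1: an LLM decomposes the long-form response into atomic facts.
    prompt = f"List each individual fact in the text, one per line:\n{response}"
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

def rate_fact(fact: str, call_llm, google_search, max_steps: int = 3) -> FactVerdict:
    # Steps 2-3: multi-step reasoning -- issue search queries, accumulate
    # evidence, then decide whether the evidence supports the fact.
    evidence: list[str] = []
    for _ in range(max_steps):
        query = call_llm(f"Write a Google Search query to verify: {fact}")
        evidence.extend(google_search(query))
    verdict = call_llm(
        f"Fact: {fact}\nEvidence: {evidence}\n"
        "Answer SUPPORTED or NOT_SUPPORTED."
    )
    return FactVerdict(fact, verdict.strip().upper().startswith("SUPPORTED"))

def safe_evaluate(response: str, call_llm, google_search) -> list[FactVerdict]:
    facts = split_into_facts(response, call_llm)
    return [rate_fact(f, call_llm, google_search) for f in facts]
```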
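The extended F1 metric the abstract mentions balances precision (the fraction of a response's facts that are supported) against recall measured relative to a hyperparameter K encoding the user's preferred number of provided facts. A minimal sketch of that aggregation, assuming recall is the number of supported facts capped at K (this follows the paper's F1@K definition):

```python
def f1_at_k(num_supported: int, num_not_supported: int, k: int) -> float:
    """F1@K: balance factual precision against recall up to a preferred
    response length K. A sketch of the aggregation the abstract describes."""
    if num_supported == 0:
        return 0.0  # a response with no supported facts scores zero
    precision = num_supported / (num_supported + num_not_supported)
    recall = min(num_supported / k, 1.0)  # capped at K supported facts
    return 2 * precision * recall / (precision + recall)

# Example: 45 supported facts, 5 unsupported, preferred length K = 64.
# precision = 0.9, recall = 45/64 ~= 0.703, so F1@K ~= 0.789.
print(f1_at_k(45, 5, 64))
```

Note the design of the recall term: a response earns no extra credit for supported facts beyond K, so the metric rewards responses that are both accurate and close to the user's preferred length rather than simply longer.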