https://arxiv.org/abs/2308.15022

Computer Science > Computation and Language
arXiv:2308.15022 (cs) [Submitted on 29 Aug 2023]

Title: Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models
Authors: Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo

Abstract: Most open-domain dialogue systems suffer from forgetting important information, especially in long-term conversations. Existing works usually train a dedicated retriever or summarizer to obtain key information from the past, which is time-consuming and depends heavily on the quality of labeled data. To alleviate this problem, we propose recursively generating summaries/memory with large language models (LLMs) to enhance their long-term memory ability. Specifically, our method first prompts the LLM to memorize small dialogue contexts and then recursively produces new memory from the previous memory and the subsequent contexts. Finally, the LLM can easily generate a highly consistent response with the help of the latest memory. We evaluate our method using ChatGPT and text-davinci-003, and experiments on a widely used public dataset show that our method generates more consistent responses in long-context conversations. Notably, our method is a potential solution for enabling LLMs to model extremely long contexts. Code and scripts will be released later.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2308.15022 [cs.CL] (or arXiv:2308.15022v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2308.15022

Submission history
From: Qingyue Wang
[v1] Tue, 29 Aug 2023 04:59:53 UTC (425 KB)
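To make the recursive scheme described in the abstract concrete, below is a minimal sketch of how memory could be folded forward session by session and then used to condition a response. The function names, prompt wording, and the `llm` callable are illustrative assumptions, not the authors' implementation (the abstract notes that code and scripts will be released later).

from typing import Callable, List

def update_memory(llm: Callable[[str], str], memory: str, new_turns: List[str]) -> str:
    # Recursively fold the newest dialogue turns into the running memory/summary.
    prompt = (
        "Previous memory:\n" + memory + "\n\n"
        "New dialogue turns:\n" + "\n".join(new_turns) + "\n\n"
        "Update the memory so it keeps all important facts about the speakers."
    )
    return llm(prompt)

def respond(llm: Callable[[str], str], memory: str, recent_turns: List[str]) -> str:
    # Generate the next response conditioned on the latest memory rather than the full history.
    prompt = (
        "Memory of the conversation so far:\n" + memory + "\n\n"
        "Recent turns:\n" + "\n".join(recent_turns) + "\n\n"
        "Reply to the user, staying consistent with the memory."
    )
    return llm(prompt)

def run_dialogue(llm: Callable[[str], str], sessions: List[List[str]]) -> str:
    # Carry only the recursively updated memory across sessions, not the raw history.
    memory = ""
    for session in sessions:
        memory = update_memory(llm, memory, session)
    return memory

Here `llm` can be any text-in/text-out wrapper around a chat model (the paper evaluates ChatGPT and text-davinci-003); the key design point is that only the latest memory, not the ever-growing transcript, is passed forward at each step.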