MemoryBank: Enhancing Large Language Models with Long-Term Memory

Authors: Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiment involves both qualitative analysis with real-world user dialogs and quantitative analysis with simulated dialogs. "To evaluate the effectiveness of MemoryBank, we conduct evaluations covering both qualitative and quantitative analyses, where the former involves real-world user dialogs and the latter employs simulated dialogs."
Researcher Affiliation | Academia | (1) Sun Yat-Sen University, (2) Harbin Institute of Technology, (3) KTH Royal Institute of Technology
Pseudocode | No | The paper describes the components and mechanisms of MemoryBank but does not provide any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement or a direct link to the source code for the MemoryBank or SiliconFriend methodology described.
Open Datasets | Yes | "A distinctive feature of SiliconFriend is its tuning with 38k psychological conversations, collected from various online sources, which enables it to exhibit empathy, carefulness, and provide useful guidance, making it adept at handling emotionally charged dialogues. The initial stage of SiliconFriend's development involves tuning the LLMs using a dataset of 38k psychological dialogues. This data, parsed from online sources, comprises a range of conversations that cover an array of emotional states and responses." (Footnote 2: psychological QA websites such as https://www.xinli001.com/.)
Dataset Splits | No | The paper describes the construction of a "memory storage" for evaluation and probing questions, but it does not give training, validation, or test splits (as percentages or sample counts) that would allow the models to be reproduced.
Hardware Specification | Yes | "We set LoRA rank r as 128 and train the model for 3 epochs with an A100 GPU."
Software Dependencies | No | "In practice, we use LangChain (LangChain Inc. 2022) for memory retrieval. ... In language-specific implementations of the open-source version of SiliconFriend, we use MiniLM (Wang et al. 2020) as the embedding model for English and Text2vec (Ming 2022) for Chinese." No version numbers are given for these software dependencies. An illustrative retrieval sketch follows the table.
Experiment Setup | Yes | "The initial stage of SiliconFriend's development involves tuning the LLMs using a dataset of 38k psychological dialogues. ... To adapt LLMs to scenarios with limited computational resources, we utilize a computation-efficient tuning approach, known as the Low-Rank Adaptation (LoRA) method (Hu et al. 2021). ... We set LoRA rank r as 128 and train the model for 3 epochs with an A100 GPU." An illustrative LoRA configuration sketch also follows the table.
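
The software-dependencies row cites LangChain for memory retrieval and MiniLM / Text2vec as embedding models, with no pinned versions. The sketch below is a rough illustration rather than the paper's implementation: it shows a dense-retrieval memory lookup with sentence-transformers, and the checkpoint name, stored memories, and top-k search are assumptions.

```python
# Illustrative memory-retrieval sketch (assumed setup; the paper names
# LangChain + MiniLM/Text2vec but gives no versions or exact checkpoints).
from sentence_transformers import SentenceTransformer, util

# Assumed English embedding model from the MiniLM family.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical memory store: summaries of past dialogue turns.
memory_texts = [
    "User mentioned feeling anxious about an upcoming job interview.",
    "User enjoys hiking on weekends and asked for trail suggestions.",
]
memory_embeddings = embedder.encode(memory_texts, convert_to_tensor=True)

def retrieve_memory(query: str, top_k: int = 1) -> list[str]:
    """Return the top-k stored memories most similar to the current query."""
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, memory_embeddings, top_k=top_k)[0]
    return [memory_texts[hit["corpus_id"]] for hit in hits]

print(retrieve_memory("How is my interview preparation going?"))
```

A production setup would typically wrap such a lookup in a vector store (for example via LangChain's retriever abstractions) rather than keeping raw embeddings in memory.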
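The experiment-setup row reports LoRA rank r = 128 and 3 epochs of tuning on a single A100. Below is a minimal sketch of such a configuration with Hugging Face transformers, peft, and datasets; the base model, target modules, learning rate, batch size, and the toy dataset standing in for the 38k psychological dialogues are all assumptions, since the excerpt does not specify them.

```python
# Illustrative LoRA tuning sketch matching the reported rank (128) and epochs (3).
# Base model, target modules, and optimizer settings are assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model_name = "facebook/opt-350m"  # hypothetical stand-in for the tuned LLM
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=128,                                # rank reported in the paper
    lora_alpha=256,                       # assumed; not stated in the excerpt
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tiny placeholder standing in for the 38k psychological dialogues (not released).
toy_dialogues = ["User: I feel stressed about work.\nAssistant: That sounds hard; tell me more."]
train_dataset = Dataset.from_dict({"text": toy_dialogues}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512)
)

training_args = TrainingArguments(
    output_dir="siliconfriend-lora",
    num_train_epochs=3,              # reported in the paper
    per_device_train_batch_size=4,   # assumed
    learning_rate=2e-4,              # assumed
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```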