Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Leveraging Attention to Effectively Compress Prompts for Long-Context LLMs
Authors: Yunlong Zhao, Haoran Wu, Bo Xu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on datasets for retrieval-augmented generation and multiple long tasks involving single- or multi-document QA. Our proposed method, AttnComp, outperforms previous baselines and validates the contributions of our components through analytical experiments. In this section, we describe the experiments conducted to evaluate the effectiveness of our proposed approach. |
| Researcher Affiliation | Academia | ¹The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China ²School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China ³Nanjing Artificial Intelligence Research of IA, Nanjing, China |
| Pseudocode | Yes | Algorithm 1: Pseudocode of our AttnComp |
| Open Source Code | No | No explicit statement or link for an open-source code release is provided. The paper mentions that the approach was implemented using existing libraries, but not a release of the authors' own code. |
| Open Datasets | Yes | We use the Natural Questions dataset (Kwiatkowski et al. 2019) and datasets from LongBench (Bai et al. 2023), respectively. |
| Dataset Splits | Yes | We use the Natural Questions dataset (Kwiatkowski et al. 2019) and datasets from LongBench (Bai et al. 2023), respectively. ... We utilize the benchmark's provided metrics and scripts for our evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. It only names the LLMs used (e.g., GPT-3.5-Turbo, LongChat, Llama-2). |
| Software Dependencies | No | The paper mentions using PyTorch, Huggingface's Transformers, C++, and the Python package python-louvain (community detection), but does not provide version numbers for these software dependencies. |
| Experiment Setup | Yes | Taking into account the latency and Llama-2's 4k window limitation, we set the compression algorithm's processing window to 2k for extremely long prompts. Data exceeding this window is chunked and processed over multiple passes. ... For the NQ dataset, we initially achieve a compression ratio of over 4x through coarse-grained compression, followed by fine-grained adjustments to satisfy the constraints. Similarly, for the LongBench dataset, consistent with previous work, we set a target compression limit of 2,000 tokens. We first apply coarse-grained compression to reduce the length to 4,000 tokens and then use a 50% fine-grained filtering process to meet the final compression requirement. ... we first identify the retrieval heads within the LLM that are most relevant to contextual information and select the top 20 to derive the compression metric. |
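The token-budget arithmetic in the setup excerpt (a 2k processing window, coarse-grained compression to an intermediate 4,000-token budget, then 50% fine-grained filtering down to the 2,000-token limit) can be sketched as below. This is a hypothetical illustration of the reported budgets only, not the authors' implementation; the function and variable names are our own.

```python
def two_stage_budget(prompt_len: int,
                     window: int = 2_000,
                     coarse_target: int = 4_000,
                     fine_keep_ratio: float = 0.5):
    """Illustrative budget arithmetic for the reported two-stage setup.

    Returns (num_chunks, after_coarse, after_fine) for a prompt of
    `prompt_len` tokens. Parameter defaults mirror the paper's reported
    LongBench configuration; the chunking rule is an assumption.
    """
    # Prompts longer than the 2k processing window are chunked and
    # handled over multiple passes (ceiling division).
    num_chunks = -(-prompt_len // window)
    # Coarse-grained compression reduces the prompt to the intermediate budget.
    after_coarse = min(prompt_len, coarse_target)
    # Fine-grained filtering keeps ~50% to meet the final 2,000-token limit.
    after_fine = int(after_coarse * fine_keep_ratio)
    return num_chunks, after_coarse, after_fine
```

For example, a 12,000-token prompt under these assumed defaults would be processed in 6 window passes, coarsely compressed to 4,000 tokens, and filtered to a final 2,000 tokens.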