Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
AttnGCG: Enhancing Jailbreaking Attacks on LLMs with Attention Manipulation
Authors: Zijun Wang, Haoqin Tu, Jieru Mei, Bingchen Zhao, Yisen Wang, Cihang Xie
TMLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, AttnGCG shows consistent improvements in attack efficacy across diverse LLMs, achieving an average increase of 7% in the Llama-2 series and 10% in the Gemma series. Our strategy also demonstrates robust attack transferability against both unseen harmful goals and black-box LLMs like GPT-3.5 and GPT-4. Moreover, we note our attention-score visualization is more interpretable, allowing us to gain better insights into how our targeted attention manipulation facilitates more effective jailbreaking. We release the code at https://github.com/UCSC-VLAA/AttnGCG-attack. |
| Researcher Affiliation | Academia | UC Santa Cruz; Johns Hopkins University; University of Edinburgh; Peking University |
| Pseudocode | Yes | Algorithm 1: AttnGCG; Algorithm 2: GCG; Algorithm 3: Universal Prompt Optimization with AttnGCG |
| Open Source Code | Yes | We release the code at https://github.com/UCSC-VLAA/AttnGCG-attack. |
| Open Datasets | Yes | We employ the AdvBench Harmful Behaviors benchmark (Zou et al., 2023) to assess the performance of jailbreak attacks. |
| Dataset Splits | Yes | We randomly sample 100 behaviors from this dataset for evaluation. Train ASR is computed on the same 25 harmful goals used during the optimization, and Test ASR is computed on 100 held-out harmful goals. |
| Hardware Specification | Yes | We report the average runtime on an NVIDIA A100 GPU for Llama-2-chat-7b. |
| Software Dependencies | No | The paper mentions various LLMs (e.g., LLaMA, Gemma, Mistral, GPT-3.5, GPT-4) and baselines (GCG, Auto DAN, ICA) but does not specify software dependencies like programming languages, libraries, or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow, Transformers library versions). |
| Experiment Setup | Yes | We train attacks using GCG and AttnGCG for 500 steps with consistent parameter settings for a fair comparison. For AutoDAN, we utilize its default implementation and parameters, which involve a total of 100 iterations for each behavior. Table 8 (hyper-parameters of GCG and AttnGCG in Section 3.2 and Section 3.3) lists: n_steps 500, batch_size 256, topk 128, target_weight (w_t) 1, and attention_weight (w_a) 0 for GCG, with w_a varying per Table 9 for AttnGCG. We set do_sample = False for open-source models, following (Chao et al., 2023); for closed-weight models, we set temperature = 0. |
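The hyper-parameters quoted above imply a weighted combination of the standard GCG target loss and an attention term. As a rough illustration only (the function name, the sign convention on the attention term, and the scalar-loss framing are assumptions, not the authors' implementation), the objective might be sketched as:

```python
def attngcg_objective(target_loss, attention_term, target_weight=1.0, attention_weight=0.0):
    """Hypothetical sketch of the AttnGCG objective.

    target_loss      -- GCG's loss on the target completion (w_t-weighted)
    attention_term   -- an attention-manipulation penalty (w_a-weighted);
                        its exact form/sign is an assumption here
    With attention_weight = 0 this reduces to plain GCG, matching the
    Table 8 setting quoted above (w_t = 1, w_a = 0 for GCG).
    """
    return target_weight * target_loss + attention_weight * attention_term
```

With `attention_weight=0.0` the combined objective equals the GCG target loss alone, which is how the report's Table 8 row distinguishes the GCG baseline from the AttnGCG runs (whose w_a values vary per Table 9).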