Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs

Authors: Yue Wang, Qizhou Wang, Feng Liu, Wei Huang, Yali Du, Xiaojiang Du, Bo Han

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct comprehensive experiments across a variety of well-established unlearning benchmarks, including TOFU (Maini et al., 2024), WMDP (Li et al., 2024), and MUSE (Shi et al., 2024). The integration of our GRU with established baselines demonstrates its effectiveness, achieving powerful unlearning capabilities alongside enhanced retention reliability. These results underscore the generality and significant potential of our approach in effectively mitigating the trade-off between unlearning and retention.
Researcher Affiliation | Academia | 1 TMLR Group, Department of Computer Science, Hong Kong Baptist University; 2 The University of Melbourne; 3 RIKEN Center for Advanced Intelligence Project; 4 King's College London; 5 Department of Electrical and Computer Engineering, Stevens Institute of Technology. Correspondence to: Bo Han <EMAIL>.
Pseudocode | Yes | Algorithm 1: GRU Framework
Open Source Code | Yes | Our code is available at https://github.com/tmlr-group/GRU.
Open Datasets | Yes | Our evaluations adopt three representative benchmarks: TOFU (Maini et al., 2024), WMDP (Li et al., 2024), and MUSE (Shi et al., 2024).
Dataset Splits | Yes | TOFU comprises 200 synthetic author profiles, 4,000 question-answer pairs in total. It covers different unlearning setups with varying proportions of data targeted to be unlearned, using 1%, 5%, or 10% of the profiles as unlearning sets.
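The split percentages above map directly onto the 200 profiles. A minimal sketch of how such forget/retain splits could be derived (the function name and the contiguous-index selection are illustrative assumptions, not the TOFU implementation):

```python
# Hedged sketch of the TOFU unlearning splits described above.
# Selecting the first n profiles is an illustrative assumption.
TOTAL_PROFILES = 200
QA_PAIRS_PER_PROFILE = 4000 // TOTAL_PROFILES  # 20 QA pairs per profile

def split_profiles(forget_fraction):
    """Split profile indices into a forget set and a retain set."""
    n_forget = int(TOTAL_PROFILES * forget_fraction)
    profiles = list(range(TOTAL_PROFILES))
    return profiles[:n_forget], profiles[n_forget:]

# The 5% setup: 10 of the 200 profiles are targeted for unlearning.
forget, retain = split_profiles(0.05)
```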
Hardware Specification | Yes | All our experiments are conducted with a series of computation nodes powered by NVIDIA A100-80GB GPUs and Intel(R) Xeon(R) Gold 6248R CPUs.
Software Dependencies | Yes | All our code is implemented with Transformers version 4.42.4 and CUDA version 12.1.
Experiment Setup | Yes | We employ the AdamW optimizer (Loshchilov & Hutter, 2017) with a batch size of 32 and learning rates of 2×10⁻⁵ for Phi-1.5 and 1×10⁻⁵ for LLaMA2-7B-chat in TOFU; 1×10⁻⁵ in MUSE; and 4×10⁻⁶ in WMDP. Furthermore, training runs for 5 epochs on TOFU, 1 epoch on MUSE, and 20 steps on WMDP. For the hyperparameters within GRU, we employ grid search on validation data to identify their optimal values. The candidate values for γ are {0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99}, and those for τ are {0.001, 0.005, 0.01, 0.1, 1.0, 10, 100}.
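The grid search described above exhaustively evaluates every (γ, τ) pair from the two candidate sets. A minimal sketch, assuming a hypothetical `evaluate` callback that scores a hyperparameter pair on validation data (higher = better unlearning/retention trade-off):

```python
from itertools import product

# Candidate grids quoted from the experiment setup above.
GAMMAS = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99]
TAUS = [0.001, 0.005, 0.01, 0.1, 1.0, 10, 100]

def grid_search(evaluate):
    """Return the (gamma, tau) pair with the best validation score.

    `evaluate(gamma, tau)` is a hypothetical user-supplied function,
    not part of the GRU codebase.
    """
    best = None
    for gamma, tau in product(GAMMAS, TAUS):
        score = evaluate(gamma, tau)
        if best is None or score > best[0]:
            best = (score, gamma, tau)
    return best[1], best[2]
```

With 12 γ values and 7 τ values, this evaluates 84 configurations per benchmark.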