Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Fine-Grained and Efficient Self-Unlearning with Layered Iteration
Authors: Hongyi Lyu, Xuyun Zhang, Hongsheng Hu, Shuo Wang, Chaoxiang He, Lianyong Qi
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on three benchmark datasets demonstrate that SULI achieves superior performance in effectiveness, efficiency, and privacy compared to the state-of-the-art baselines in both class-wise and instance-wise unlearning scenarios. |
| Researcher Affiliation | Academia | Hongyi Lyu (1), Xuyun Zhang (1), Hongsheng Hu (2), Shuo Wang (3), Chaoxiang He (3), Lianyong Qi (4) — 1: Macquarie University; 2: University of Newcastle; 3: Shanghai Jiao Tong University; 4: China University of Petroleum (East China) |
| Pseudocode | Yes | Algorithm 1 Self-Unlearning with Layered Iteration (SULI) |
| Open Source Code | Yes | The source code is released at https://github.com/Hongyi-Lyu-MQ/SULI. |
| Open Datasets | Yes | Datasets. We follow the previous works [Chen et al., 2023; Cha et al., 2024] and use three datasets: CIFAR-10 [Krizhevsky, 2009], VGGFace2 [Cao et al., 2018], and UTKFace [Zhang et al., 2017]. |
| Dataset Splits | No | The paper defines Dtrain, Df (forgetting dataset), and Dr (retaining set) for the unlearning task but does not explicitly provide the train/test/validation splits for the datasets (CIFAR-10, VGGFace2, UTKFace) used for initial model training in the main text. It mentions details are in Appendix B and C, which are not provided. |
| Hardware Specification | Yes | Our experimental environment includes an NVIDIA RTX 4070 GPU, Python 3.11, and PyTorch 2.1.1. |
| Software Dependencies | Yes | Our experimental environment includes an NVIDIA RTX 4070 GPU, Python 3.11, and PyTorch 2.1.1. |
| Experiment Setup | Yes | We utilize the ADAM optimizer [Kingma and Ba, 2014] with carefully selected learning rates optimized for both class-wise and instance-wise unlearning tasks. ... We perform a grid search (the results are shown in appendix D) to optimize the hyperparameter t within the range [1, 25], selecting t = 2 for all experiments as it balances model utility and unlearning effectiveness. Our experiments cover two primary unlearning scenarios: class-wise unlearning, where unlearning stops early when the model's accuracy on Df approaches zero, and instance-wise unlearning, where unlearning ceases when the model's accuracy on Df matches that on a 1% reference dataset. |
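The two stopping criteria quoted in the Experiment Setup row can be sketched as simple predicates. This is a minimal illustration assuming accuracies are reported as fractions in [0, 1]; the function names and tolerance thresholds are hypothetical and not taken from the released SULI code.

```python
def should_stop_classwise(acc_forget: float, eps: float = 0.01) -> bool:
    """Class-wise unlearning: stop once accuracy on the forgetting
    set Df approaches zero (here: falls below a small threshold)."""
    return acc_forget <= eps


def should_stop_instancewise(acc_forget: float, acc_reference: float,
                             tol: float = 0.01) -> bool:
    """Instance-wise unlearning: stop once accuracy on Df matches the
    accuracy on a 1% reference dataset, within a tolerance."""
    return abs(acc_forget - acc_reference) <= tol


# Example checks with illustrative accuracy values:
print(should_stop_classwise(0.005))            # forget accuracy near zero
print(should_stop_classwise(0.50))             # still remembers the class
print(should_stop_instancewise(0.42, 0.425))   # matches reference accuracy
```

The tolerance values (`eps`, `tol`) would in practice be tuned alongside the paper's hyperparameter t; the paper itself does not state them.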