Unified Gradient-Based Machine Unlearning with Remain Geometry Enhancement

Authors: Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments validate our efficacy and efficiency. Notably, our method successfully performs class-forgetting on ImageNet using DiT and forgets a class on CIFAR-10 using DDPM in just 50 steps, compared to thousands of steps required by previous methods.
Researcher Affiliation | Academia | Zhehao Huang, Xinwen Cheng, JingHao Zheng, Haoran Wang, Zhengbao He, Tao Li, Xiaolin Huang, Shanghai Jiao Tong University, [kinght_H, xinwencheng, zjh20030406, haoran_whynot, lstefanie, li.tao, xiaolinhuang]@sjtu.edu.cn
Pseudocode | Yes | Appendix C, Algorithm A1: The Algorithm of Proposed SFR-on
Open Source Code | Yes | Code is available at Unified-Unlearning-w-Remain-Geometry.
Open Datasets | Yes | In image classification, we primarily focus on the random subset unlearning task. Evaluations are conducted using ResNet-18 [52] on CIFAR-10 [53] and Swin-T [54] on Tiny ImageNet [55], with additional tests on random subset and class-wise forgetting tasks involving CIFAR-100 [53] and SVHN [56], detailed in Appendix F.2. ... Moreover, for the first time, we explore the latent diffusion model [42] equipped with Diffusion Transformer (DiT) [58] on ImageNet [59]... Given that SD V1.4 is trained on the LAION dataset [63]...
Dataset Splits | No | The paper defines forgetting and remaining datasets (D_f, D_r) and mentions a test dataset (D_t), but does not explicitly provide information on a distinct validation dataset split or its proportions. (See the split sketch after the table.)
Hardware Specification | Yes | Experiments are run on 1 RTX 4090. ... Experiments are run on 2 RTX 4090s.
Software Dependencies | No | The paper mentions specific models and optimizers like 'ResNet-18', 'Swin-T', 'DDPM', 'UNet', 'DiT', and 'AdamW optimizer', and refers to 'torchvision', but does not provide specific version numbers for these software components or programming languages.
Experiment Setup | Yes | Our SFR-on trains for 1500 steps with a constant outer-loop learning rate of α = 1.0 and an inner-loop iteration number T_in = 5. SFR-on searches the inner-loop learning rate for forgetting in the range [0.1, 0.5] and for remaining in the range [10⁻³, 10⁻²], the temperature scalar λ in the range [0.0, 2.0], and the threshold γ in the list [0.3, 1.0, 3.0, 10.0]. Experiments are run on 1 RTX 4090. A summary of the hyperparameters for each method is shown in Tab. A1. (See the configuration sketch after the table.)
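
To make the D_f/D_r/D_t terminology in the Dataset Splits row concrete, here is a minimal sketch of a random-subset forgetting split on CIFAR-10 with torchvision. The 10% forget fraction, the seed, and the variable names (forget_set, remain_set) are illustrative assumptions; the quoted paper text does not fix them.

```python
# Minimal sketch of a random-subset forgetting split (assumptions noted inline).
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())  # D_t

# Randomly choose indices to forget; everything else remains in D_r.
g = torch.Generator().manual_seed(0)          # fixed seed, assumed
perm = torch.randperm(len(train_set), generator=g)
n_forget = int(0.10 * len(train_set))         # assumed 10% forget fraction
forget_set = Subset(train_set, perm[:n_forget].tolist())  # D_f
remain_set = Subset(train_set, perm[n_forget:].tolist())  # D_r
```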
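The Experiment Setup row reads as a hyperparameter search specification. The sketch below collects those values into a config object and enumerates a search grid; only the numeric ranges come from the paper, while the class name SFRonConfig and its field names are assumptions for illustration.

```python
# Hedged sketch of the quoted hyperparameter search; values from the paper,
# names assumed.
from dataclasses import dataclass
from itertools import product

@dataclass
class SFRonConfig:
    outer_steps: int = 1500   # total outer-loop steps
    outer_lr: float = 1.0     # constant outer-loop learning rate alpha
    inner_iters: int = 5      # inner-loop iteration number T_in
    forget_lr: float = 0.1    # searched in [0.1, 0.5]
    remain_lr: float = 1e-3   # searched in [1e-3, 1e-2]
    temperature: float = 0.0  # lambda, searched in [0.0, 2.0]
    threshold: float = 0.3    # gamma, from {0.3, 1.0, 3.0, 10.0}

# Example grid over the listed search ranges (range endpoints only, for brevity).
grid = [SFRonConfig(forget_lr=f, remain_lr=r, temperature=t, threshold=g)
        for f, r, t, g in product([0.1, 0.5], [1e-3, 1e-2],
                                  [0.0, 2.0], [0.3, 1.0, 3.0, 10.0])]
```

A dataclass keeps the fixed settings (outer_steps, outer_lr, inner_iters) separate from the four searched hyperparameters, mirroring how the paper reports them.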