Fast Machine Unlearning without Retraining through Selective Synaptic Dampening
Authors: Jack Foster, Stefan Schoepf, Alexandra Brintrup
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method against several existing unlearning methods in a range of experiments using ResNet18 and Vision Transformer. Results show that the performance of SSD is competitive with retrain-based post hoc methods, demonstrating the viability of retrain-free post hoc unlearning approaches. |
| Researcher Affiliation | Academia | ¹University of Cambridge, Department of Engineering; ²The Alan Turing Institute. {jwf40, ss2823, ab702}@cam.ac.uk |
| Pseudocode | Yes | Algorithm 1: Selective Synaptic Dampening. Input: ϕθ, D, Df (optionally []_D, to skip step 1). Parameters: α, λ. Output: ϕθ. 1: Calculate and store []_D once; discard D. 2: Calculate []_Df. 3: for i in range \|θ\| do 4: if []_Df,i > α []_D,i then 5: θ_i = min((λ []_D,i / []_Df,i) θ_i, θ_i) 6: end if 7: end for 8: return ϕθ. (A PyTorch sketch of this step follows the table.) |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate our method on image classification using CIFAR10, CIFAR20, and CIFAR100 (Krizhevsky and Hinton 2010), in line with Golatkar, Achille, and Soatto (2020a); Chundawat et al. (2023a). ... We substitute it with the Pins Face Recognition dataset (Burak 2020), which consists of 17,534 faces of 105 celebrities collected from Pinterest. |
| Dataset Splits | No | The paper mentions that models are trained with early stopping, which implies the use of a validation set, but it does not provide specific details on the dataset splits (e.g., percentages or sample counts) used for training, validation, or testing. |
| Hardware Specification | Yes | Experiments were performed on NVIDIA RTX4090 with Intel Xeon processors. |
| Software Dependencies | No | The paper mentions Python 3, PyTorch, and Ubuntu 20.04.6 LTS. While Ubuntu's version is specific, Python and PyTorch are mentioned without specific version numbers (e.g., Python 3.x, PyTorch 1.x), which are required for full reproducibility of software dependencies. |
| Experiment Setup | Yes | Models are trained with early stopping using a multi-step learning rate scheduler beginning at lr = 0.1 and the Adam optimiser (Kingma and Ba 2014)... We found hyper-parameters using 50 runs of the TPE search from Optuna (Akiba et al. 2019), for values α ∈ [0.1, 100] and λ ∈ [0.1, 5]. ... We use λ=1 and α=10 for all ResNet18 CIFAR tasks. For Pins Face Recognition, we use α=50 and λ=0.1 due to the much greater similarity between classes. ViT also uses λ=1 on all CIFAR tasks. We change α=10 to α=5 for slightly improved performance on class unlearning and to α=25 on sub-class unlearning. (An Optuna search sketch follows the table.) |
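
To make the Pseudocode row concrete, below is a minimal PyTorch sketch of the dampening step described in Algorithm 1. It is not the authors' released implementation: the importance estimator (mean squared cross-entropy gradients as a proxy for the Fisher-diagonal importances written `[]_D` above), the function names `compute_importances` and `ssd_dampen`, and the small epsilon for numerical stability are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def compute_importances(model, loader, device="cpu"):
    """Per-parameter importance as the mean squared gradient of the loss over
    a dataset (a common Fisher-diagonal proxy; the paper's exact estimator may
    differ). Returns a dict keyed by parameter name."""
    importances = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importances[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: imp / max(n_batches, 1) for n, imp in importances.items()}

@torch.no_grad()
def ssd_dampen(model, imp_full, imp_forget, alpha=10.0, lam=1.0):
    """Dampening step (Algorithm 1, lines 3-7): parameters whose importance on
    the forget set exceeds alpha times their importance on the full training
    set are scaled down; the clamp to 1 ensures weights are never scaled up."""
    for n, p in model.named_parameters():
        mask = imp_forget[n] > alpha * imp_full[n]                      # line 4
        beta = torch.clamp(lam * imp_full[n] / (imp_forget[n] + 1e-12), max=1.0)
        scale = torch.where(mask, beta, torch.ones_like(beta))
        p.mul_(scale)                                                   # line 5
    return model
```

As in Algorithm 1, `imp_full` would be computed once on the full training data and stored, so unlearning a forget set later only requires one importance pass over `Df` plus the in-place scaling above.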
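The Experiment Setup row quotes a 50-trial Optuna TPE search over α ∈ [0.1, 100] and λ ∈ [0.1, 5]. The sketch below shows how such a search could be wired up with Optuna's TPE sampler; the objective `evaluate_unlearning` is a hypothetical placeholder for whatever unlearning metric is optimised, and the fixed sampler seed is an illustrative choice.

```python
import optuna

def objective(trial):
    # Search ranges quoted in the paper: alpha in [0.1, 100], lambda in [0.1, 5].
    alpha = trial.suggest_float("alpha", 0.1, 100.0)
    lam = trial.suggest_float("lambda", 0.1, 5.0)
    # evaluate_unlearning is a hypothetical placeholder: it would run SSD with
    # (alpha, lam) and return a scalar score balancing forget-set removal
    # against retain-set accuracy (the paper's exact objective is not shown here).
    return evaluate_unlearning(alpha=alpha, lam=lam)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)   # 50 runs of TPE search, as in the paper
print(study.best_params)
```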