Towards Certified Unlearning for Deep Neural Networks
Authors: Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three real-world datasets, including ablation studies, demonstrate the efficacy of our method and the advantages of certified unlearning in DNNs in practice. |
| Researcher Affiliation | Academia | Binchi Zhang, Yushun Dong, Tianhao Wang, and Jundong Li (University of Virginia, Charlottesville, VA, USA). Correspondence to: Jundong Li <jundong@virginia.edu>. |
| Pseudocode | Yes | Algorithm 1 Single-Batch Certified Unlearning for DNNs. (A generic, illustrative unlearning-update sketch, not the paper's algorithm, appears after the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/zhangbinchi/certified-deep-unlearning. |
| Open Datasets | Yes | We conduct experiments on three widely adopted real-world image-classification datasets, MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), and CIFAR-10 (Krizhevsky et al., 2009), to evaluate certified unlearning for DNNs. |
| Dataset Splits | No | The paper reports training and test set sizes (e.g., '60,000 handwritten digit images for training and 10,000 images for testing' for MNIST) but does not describe a separate validation split. (A loading sketch for the standard train/test splits appears after the table.) |
| Hardware Specification | Yes | All experiments are implemented on an Nvidia RTX A6000 GPU. |
| Software Dependencies | Yes | python == 3.9.16; torch == 1.12.1; torchvision == 0.13.1; numpy == 1.24.3; scikit-learn == 1.2.2; scipy == 1.10.1 |
| Experiment Setup | Yes | For training the original models, Adam was used as the optimizer with a learning rate of 1e-3, a weight decay of 5e-4, and 50 training epochs. All experiments are conducted on three real-world datasets: MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011), all publicly accessible (MNIST under the GNU General Public License, CIFAR-10 under the MIT License, and SVHN under the CC BY-NC License). Numerical results are reported as the mean and standard deviation over three random seeds; the relearn time in Table 2 is reported as a rounded mean without standard deviation since the epoch number is an integer. The unlearned data is selected randomly from the training set. Detailed hyperparameter settings of the original models are presented in Table 4. (A minimal training-configuration sketch appears after the table.) |
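
The training configuration reported in the Experiment Setup row maps to a few lines of PyTorch. The sketch below is a minimal illustration assuming a standard classification loop; the model architecture and data loader are hypothetical placeholders, and the paper's actual training code is in the linked repository.

```python
# Minimal sketch of the reported training configuration for the original
# models: Adam optimizer, lr = 1e-3, weight decay = 5e-4, 50 epochs.
# `model` and `train_loader` are hypothetical placeholders.
import torch
import torch.nn as nn

def train_original(model: nn.Module, train_loader, device: str = "cuda") -> nn.Module:
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(50):  # reported number of training epochs
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```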
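Since the paper uses three public datasets with their standard train/test splits and no described validation split, those splits can be reproduced with stock torchvision loaders. This is a generic loading sketch, not the authors' preprocessing pipeline; the transform is a placeholder (the paper may apply normalization or augmentation).

```python
# Hypothetical loading of the three reported datasets with their standard
# torchvision train/test splits; no separate validation split is described.
from torchvision import datasets, transforms

tfm = transforms.ToTensor()  # placeholder; actual preprocessing may differ
mnist_train = datasets.MNIST("data", train=True, download=True, transform=tfm)
mnist_test = datasets.MNIST("data", train=False, download=True, transform=tfm)
svhn_train = datasets.SVHN("data", split="train", download=True, transform=tfm)
svhn_test = datasets.SVHN("data", split="test", download=True, transform=tfm)
cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=tfm)
```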
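The Pseudocode row points to the paper's Algorithm 1 (Single-Batch Certified Unlearning for DNNs); consult the paper or its repository for the exact procedure. The sketch below only illustrates the general shape of a certified-unlearning-style update on a nonconvex model: a damped Newton correction computed with Hessian-vector products, followed by Gaussian perturbation. The damping `lam`, noise scale `sigma`, CG iteration count, and function names are all illustrative assumptions, not the authors' algorithm.

```python
# Generic influence-function-style unlearning update with Gaussian noise.
# NOT the paper's Algorithm 1; all hyperparameters are placeholders.
import torch

def hvp(loss, params, vec):
    """Hessian-vector product of `loss` w.r.t. `params` against flat `vec`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    prod = torch.autograd.grad(flat @ vec, params, retain_graph=True)
    return torch.cat([p.reshape(-1) for p in prod]).detach()

def unlearn_update(model, loss_retain, loss_forget, n_retain,
                   lam=1.0, sigma=0.01, cg_iters=10):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient of the loss on the batch to be forgotten.
    g = torch.autograd.grad(loss_forget, params)
    g = torch.cat([x.reshape(-1) for x in g]).detach()
    # Approximately solve (H + lam * I) x = g by conjugate gradient, where H
    # is the Hessian of the retained loss; lam damps the possibly indefinite
    # DNN Hessian so the linear system is well-posed.
    x = torch.zeros_like(g)
    r = g.clone()
    p = r.clone()
    rs = r @ r
    for _ in range(cg_iters):
        Ap = hvp(loss_retain, params, p) + lam * p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    # Newton-style correction scaled by the retained-set size, plus Gaussian
    # noise in the spirit of (epsilon, delta)-style unlearning certification.
    step = x / n_retain + sigma * torch.randn_like(x)
    with torch.no_grad():
        offset = 0
        for prm in params:
            n = prm.numel()
            prm.add_(step[offset:offset + n].view_as(prm))
            offset += n
```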