Machine Unlearning via Null Space Calibration

Authors: Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare UNSC with existing methods on the standard Fashion-MNIST [Xiao et al., 2017], CIFAR-10, CIFAR-100 [Krizhevsky et al., 2009], and SVHN [Netzer et al., 2011] benchmarks. We use AlexNet [Krizhevsky et al., 2012] for Fashion-MNIST, VGG-11 [Simonyan and Zisserman, 2015] for SVHN, All-CNN [Springenberg et al., 2015] for CIFAR-10, and ResNet-18 [He et al., 2015] for CIFAR-100. Utility guarantee is evaluated by accuracy on the remaining testing data (Acc_Drt) and accuracy on the unlearning testing data (Acc_Dut). Table 1 displays the accuracy results of models obtained by different unlearning methods.
Researcher Affiliation | Academia | Huiqiang Chen^1, Tianqing Zhu^2, Xin Yu^3, Wanlei Zhou^2. ^1 University of Technology Sydney, NSW, Australia; ^2 City University of Macau, Macau, China; ^3 University of Queensland, QLD, Australia. huiqiang.chen@student.uts.edu.au, {tqzhu, wlzhou}@cityu.edu.mo, xin.yu@uq.edu.au
Pseudocode | Yes | Algorithm 1: Find layer-wise subspace of class k; Algorithm 2: Unlearning via null space calibration
Open Source Code | Yes | Code released at https://github.com/HQC-ML/UNSC
Open Datasets | Yes | We compare UNSC with existing methods on the standard Fashion-MNIST [Xiao et al., 2017], CIFAR-10, CIFAR-100 [Krizhevsky et al., 2009], and SVHN [Netzer et al., 2011] benchmarks.
Dataset Splits | Yes | The training set is divided into 90% for training and 10% for validation.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper mentions using the SGD optimizer but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup | Yes | We train the original and retrained models for 200 epochs and stop the training based on validation accuracy. The patience is set to 30. We use the SGD optimizer in all experiments, starting with a learning rate of 0.1 and reducing it by 0.2 at epochs 60, 120, and 160. In the unlearning stage, we determine the best learning rate and number of epochs for different datasets and networks.
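The pseudocode row above names the paper's two algorithms: finding a layer-wise subspace per class, and calibrating unlearning updates into the null space of the remaining classes' activations. A minimal NumPy sketch of that idea follows; it is not the paper's released code, and the function name, tolerance, and shapes are our own illustrative choices. The null space of a layer's retained-class activation matrix is spanned by the right singular vectors with (numerically) zero singular values, and projecting a gradient update onto that span leaves those activations unchanged.

```python
import numpy as np

def null_space_projector(acts, tol=1e-6):
    """Build a projector onto the null space of an activation matrix.

    acts: (n_samples x d) activations of one layer on retained-class data.
    Returns a (d x d) matrix P such that acts @ (P @ g) ~ 0 for any g.
    """
    # Right singular vectors whose singular values are ~zero span the
    # null space of the activations.
    _, s, vt = np.linalg.svd(acts, full_matrices=True)
    rank = int(np.sum(s > tol * s.max()))
    null_basis = vt[rank:].T           # d x (d - rank) orthonormal basis
    return null_basis @ null_basis.T   # orthogonal projector, d x d

# Toy example: activations of rank <= 8 embedded in 16 dimensions.
rng = np.random.default_rng(0)
acts = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 16))
P = null_space_projector(acts)

g = rng.standard_normal(16)            # a raw unlearning gradient step
g_null = P @ g                         # calibrated step: does not perturb
                                       # the retained-class activations
```

Applying `g_null` instead of `g` changes the layer's response to the unlearned class while, by construction, `acts @ g_null` stays (numerically) zero on the retained data.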
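The 90%/10% train/validation split in the Dataset Splits row can be sketched in a few lines of standard-library Python; the function name and the 50,000-example size (CIFAR-10's training set) are illustrative assumptions, not details from the paper.

```python
import random

def split_train_val(indices, val_frac=0.1, seed=0):
    """Shuffle indices and split them into (train, validation) lists."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)   # deterministic shuffle for the split
    n_val = int(len(idx) * val_frac)   # 10% held out for validation
    return idx[n_val:], idx[:n_val]

# e.g. the CIFAR-10 training set has 50,000 examples
train_idx, val_idx = split_train_val(range(50000))
```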
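The training schedule in the Experiment Setup row says the learning rate starts at 0.1 and is "reduced by 0.2" at epochs 60, 120, and 160. Assuming this means multiplying by a factor of 0.2 at each milestone (the usual reading, e.g. a step schedule with gamma=0.2), it can be sketched as:

```python
def lr_at_epoch(epoch, base_lr=0.1, milestones=(60, 120, 160), gamma=0.2):
    """Step schedule: multiply the learning rate by gamma at each milestone."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

Under this reading the rate is 0.1 for epochs 0-59, 0.02 for 60-119, 0.004 for 120-159, and 0.0008 from epoch 160 on.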