ARCANE: An Efficient Architecture for Exact Machine Unlearning

Authors: Haonan Yan, Xiaoguang Li, Ziyao Guo, Hui Li, Fenghua Li, Xiaodong Lin

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively evaluate ARCANE on three typical datasets with three common model architectures. Experiment results show the effectiveness and superiority of ARCANE over both the naive retraining and the state-of-the-art method in terms of model performance and unlearning speed.
Researcher Affiliation | Academia | (1) State Key Laboratory of ISN, School of Cyber Engineering, Xidian University, China; (2) State Key Laboratory of Information Security, Institute of Information Engineering, China; (3) School of Cyber Security, University of Chinese Academy of Sciences, China; (4) School of Computer Science, University of Guelph, Canada
Pseudocode | No | The paper describes the steps of its architecture and model (e.g., partitioning, data selection, training state saving) but does not include any explicitly labeled pseudocode or algorithm blocks (an illustrative sketch of such a partition-based workflow is given after the table).
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | The datasets used in this work are simple MNIST, moderate CIFAR-10, and complex Imagenet.
Dataset Splits | No | The paper explicitly mentions training and test datasets but does not describe the use of a separate validation set or its specific split details.
Hardware Specification | Yes | We run all experiments on an Ubuntu 16.04.1 LTS 64 bit server, with one Intel(R) Xeon(R) Silver 4214 CPU, 128GB RAM, and one RTX 2080 Ti GPU with 11GB dedicated memory.
Software Dependencies | No | The paper mentions the operating system (Ubuntu 16.04.1 LTS) but does not specify version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used.
Experiment Setup | Yes | The number of epochs is 50. For the task of MNIST, we set the batch size to 256. For the task of CIFAR-10 and Imagenet, the batch size is 64. (These values are collected in the configuration sketch after the table.)
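
Because the paper contains no pseudocode, the following is a minimal illustrative sketch, under stated assumptions, of the general partition-and-checkpoint pattern the Pseudocode row refers to (partitioning, data selection, training state saving). All names here (Partition, train_partition, unlearn, model_init, train_step) are hypothetical and do not come from the paper; the sketch only shows why partitioned training with saved state makes exact unlearning cheaper than full retraining, which is the property the paper's prose description relies on.

```python
"""Illustrative sketch (not the authors' code) of exact unlearning via
partitioned training with training-state saving. Only the high-level steps
(partitioning, state saving, retraining the affected part) come from the
paper's prose; every name below is hypothetical."""
from dataclasses import dataclass, field


@dataclass
class Partition:
    samples: list                                     # training samples assigned to this partition
    checkpoints: list = field(default_factory=list)   # saved model states, one per training step


def train_partition(part, model_init, train_step):
    """Train one partition sequentially, saving the state after every sample."""
    state = model_init()
    part.checkpoints = [state]
    for sample in part.samples:
        state = train_step(state, sample)
        part.checkpoints.append(state)                # training-state saving
    return state


def unlearn(partitions, part_idx, sample_idx, train_step):
    """Exactly unlearn one sample: delete it, then retrain only the affected
    partition, starting from the checkpoint taken before that sample was used."""
    part = partitions[part_idx]
    del part.samples[sample_idx]
    state = part.checkpoints[sample_idx]              # last state unaffected by the removed sample
    part.checkpoints = part.checkpoints[: sample_idx + 1]
    for sample in part.samples[sample_idx:]:          # replay only the remaining suffix
        state = train_step(state, sample)
        part.checkpoints.append(state)
    return state                                      # all other partitions stay untouched
```

In this sketch the cost of an unlearning request is bounded by the size of one partition (and, with saved state, only by the portion trained after the removed sample) rather than by the size of the whole training set, which is the efficiency argument behind partition-based exact unlearning.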
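
The reported experiment setup can also be gathered into one place. Only the numeric values (50 epochs; batch size 256 for MNIST, 64 for CIFAR-10 and ImageNet) and the hardware description are taken from the table above; the dictionary layout and key names are assumptions made for readability.

```python
# Hyperparameters and environment as reported in the paper; the dictionary
# structure and key names are illustrative assumptions, the values are quoted.
EXPERIMENT_SETUP = {
    "epochs": 50,
    "batch_size": {"MNIST": 256, "CIFAR-10": 64, "ImageNet": 64},
    "hardware": {
        "os": "Ubuntu 16.04.1 LTS (64-bit)",
        "cpu": "Intel Xeon Silver 4214",
        "ram": "128 GB",
        "gpu": "RTX 2080 Ti (11 GB)",
    },
}
```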