Fast Federated Machine Unlearning with Nonlinear Functional Theory
Authors: Tianshi Che, Yang Zhou, Zijie Zhang, Lingjuan Lyu, Ji Liu, Da Yan, Dejing Dou, Jun Huan
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation on real datasets demonstrates the superior performance of our FMU model against several state-of-the-art techniques. |
| Researcher Affiliation | Collaboration | ¹Auburn University, USA; ²Sony AI, Japan; ³Baidu Research, China; ⁴University of Alabama at Birmingham, USA; ⁵Boston Consulting Group, USA; ⁶AWS AI Labs, USA. |
| Pseudocode | No | The paper describes the FFMU model training process in text (e.g., 'On the device side, a local ML model... is trained...'), but it does not include a formally labeled pseudocode block or algorithm. |
| Open Source Code | No | We promise to release our open-source codes on GitHub and maintain a project website with detailed documentation for long-term access by other researchers and end-users after the paper is accepted. |
| Open Datasets | Yes | We study image classification networks on three standard image datasets: Fashion-MNIST, CIFAR-10, and SVHN. The above three image datasets are all public datasets, which allow researchers to use for non-commercial research and educational purposes. |
| Dataset Splits | No | The paper specifies training and test data sizes for Fashion-MNIST, CIFAR-10, and SVHN (e.g., 'We use 60,000 examples as training data and 10,000 examples as test data for Fashion-MNIST'), but it does not provide explicit numerical splits for a separate validation set. |
| Hardware Specification | Yes | The experiments were conducted on a compute server running on Red Hat Enterprise Linux 7.2 with 2 CPUs of Intel Xeon E5-2650 v4 (at 2.66 GHz) and 8 GPUs of NVIDIA GeForce GTX 2080 Ti (with 11GB of GDDR6 on a 352-bit memory bus and memory bandwidth in the neighborhood of 620GB/s), 256GB of RAM, and 1TB of HDD. |
| Software Dependencies | Yes | The codes were implemented in Python 3.7.3 and PyTorch 1.0.14. We also employ Numpy 1.16.4 and Scipy 1.3.0 in the implementation. |
| Experiment Setup | Yes | The neural networks are trained with Kaiming initialization (He et al., 2015) using SGD for 120 epochs with an initial learning rate of 0.05 and batch size 500. The learning rate is decayed by a factor of 0.1 at 1/2 and 3/4 of the total number of epochs. Also, 'For our FFMU model, we performed hyperparameter selection by performing a parameter sweep on standard deviation σ ∈ {0.025, 0.05, 0.1, 0.2, 0.3, 0.5, 1} in the Gaussian distribution, quantization threshold λ ∈ {σ²/4, σ²/2, σ², 2σ², 4σ²}, ratio of data removals ∈ {5%, 8%, 10%, 15%, 20%}, local epochs of the machine unlearning model ∈ {1, 2, 3, 4, 5}, global epochs of the machine unlearning model ∈ {40, 80, 120, 160, 200}, batch size for training the model ∈ {30, 40, 50, 60, 70}, and learning rate ∈ {0.04, 0.06, 0.08, 0.1, 0.12}.' A minimal sketch of this training setup follows the table. |
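
To make the quoted training recipe concrete, the sketch below reconstructs it as a standard PyTorch loop: Kaiming initialization, SGD with an initial learning rate of 0.05, batch size 500, 120 epochs, and a 0.1 learning-rate decay at 1/2 and 3/4 of training. It is an illustrative reconstruction, not the authors' released code: the two-layer network is a placeholder, Fashion-MNIST is used only because its 60,000/10,000 train/test split is the one reported above, and the federated and unlearning components of FFMU are omitted.

```python
# Minimal sketch of the described training configuration (not the authors' code).
import torch
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def kaiming_init(module):
    # Kaiming (He et al., 2015) initialization for conv/linear layers, as stated in the paper.
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model = nn.Sequential(            # placeholder network, not the paper's architecture
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
model.apply(kaiming_init)

train_set = datasets.FashionMNIST(  # 60,000 training / 10,000 test examples
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
train_loader = DataLoader(train_set, batch_size=500, shuffle=True)

epochs = 120
optimizer = SGD(model.parameters(), lr=0.05)
# Decay the learning rate by 0.1 at 1/2 and 3/4 of the total epochs (epochs 60 and 90).
scheduler = MultiStepLR(optimizer, milestones=[epochs // 2, 3 * epochs // 4], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The hyperparameter sweep quoted in the table (over σ, λ, removal ratio, local/global epochs, batch size, and learning rate) would sit outside this loop, re-running it once per configuration.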