Adaptive Machine Unlearning
Authors: Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 6, we complement our main result with a set of experimental results on CIFAR-10, MNIST, and Fashion-MNIST that demonstrate differential privacy may be useful in giving adaptive guarantees beyond the statement of our theorems. |
| Researcher Affiliation | Academia | Varun Gupta¹, Christopher Jung¹, Seth Neel², Aaron Roth¹, Saeed Sharifi-Malvajerdi¹, and Chris Waites³ (¹University of Pennsylvania, ²Harvard University, ³Stanford University) |
| Pseudocode | Yes | The paper contains "Algorithm 1: Interaction between (A, RA) and UpdReq", "Algorithm 2: A^distr: Distributed Learning Algorithm", and "Algorithm 3: RA^distr: Distributed Unlearning Algorithm: t-th round of unlearning". (A toy sketch of this sharded learn/unlearn pattern appears after the table.) |
| Open Source Code | Yes | The code for our experiments can be found at https://github.com/ChrisWaites/adaptive-machine-unlearning. |
| Open Datasets | Yes | Experimental results on CIFAR-10 [Krizhevsky and Hinton, 2009], MNIST [LeCun et al., 1998], and Fashion-MNIST [Xiao et al., 2017] |
| Dataset Splits | No | The paper mentions using CIFAR-10, MNIST, and Fashion-MNIST but does not explicitly provide train/validation/test split details such as percentages, sample counts, or splitting methodology; it refers only to test sets in general terms. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions "JAX" and "DP-SGD" but does not provide specific version numbers for these or for any other key software dependencies used in the experiments (an illustrative DP-SGD step is sketched after this table). |
| Experiment Setup | Yes | Full experimental details can be found in the appendix. (Appendix A, Experimental Details, provides the network architecture, batch size 128, Adam optimizer, learning rate 1e-3, 50 epochs, and the number of shards k for the SISA framework.) |
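
The pseudocode row above references the paper's distributed learning and unlearning algorithms, which follow a sharded, SISA-style pattern: partition the data into k shards, train one constituent model per shard, and on a deletion request retrain only the affected shard. The following toy sketch is a minimal illustration of that pattern under our own assumptions (a trivial "model" that averages its shard's labels); it is not the authors' implementation, for which see the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: ids 0..n-1 with scalar labels (a stand-in for CIFAR/MNIST examples).
n, k = 1000, 10                                     # n examples, k shards (SISA-style)
labels = rng.normal(size=n)
shards = [list(range(i, n, k)) for i in range(k)]   # fixed k-way partition

def train_shard(shard):
    """Hypothetical constituent 'model': here, just the shard's label mean."""
    return np.mean([labels[i] for i in shard])

# Learning: fit one constituent model per shard.
models = [train_shard(s) for s in shards]

def unlearn(example_id):
    """One unlearning round: drop the example and retrain only its shard."""
    for j, shard in enumerate(shards):
        if example_id in shard:
            shard.remove(example_id)
            models[j] = train_shard(shard)          # only this shard is retrained
            return

unlearn(42)
print(np.mean(models))                              # aggregate prediction after deletion
```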
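The dependencies row notes JAX and DP-SGD without version numbers. As a hedged illustration only, the sketch below shows what one per-example-clipped, noise-added gradient step looks like in JAX: the clipping norm and noise multiplier are hypothetical placeholders, the batch size 128 and learning rate 1e-3 echo the quoted appendix details, and plain SGD replaces the paper's Adam optimizer for brevity.

```python
import jax
import jax.numpy as jnp

# Per-example squared loss for a toy linear model.
def loss_fn(w, x, y):
    return (jnp.dot(x, w) - y) ** 2

CLIP_NORM = 1.0    # hypothetical clipping norm
NOISE_MULT = 1.1   # hypothetical noise multiplier
LR = 1e-3          # learning rate quoted in the appendix details

# vmap over the batch yields one gradient per example.
per_example_grad = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))

def dp_sgd_step(w, xb, yb, key):
    grads = per_example_grad(w, xb, yb)                        # shape (batch, dim)
    norms = jnp.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * jnp.minimum(1.0, CLIP_NORM / (norms + 1e-12))
    noise = NOISE_MULT * CLIP_NORM * jax.random.normal(key, w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / xb.shape[0]
    return w - LR * noisy_mean                                 # SGD update (paper uses Adam)

key = jax.random.PRNGKey(0)
data_key, step_key = jax.random.split(key)
w = jnp.zeros(3)
xb = jax.random.normal(data_key, (128, 3))                     # batch size 128, as quoted
yb = jnp.ones(128)
w = dp_sgd_step(w, xb, yb, step_key)
print(w)
```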