DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy

Authors: Anda Cheng, Jiaxing Wang, Xi Sheryl Zhang, Qiang Chen, Peisong Wang, Jian Cheng

AAAI 2022, pp. 6358-6366 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically certify the effectiveness of the proposed framework. The searched model DPNASNet achieves state-of-the-art privacy/utility trade-offs, e.g., for the privacy budget of (ϵ, δ) = (3, 1×10⁻⁵), our model obtains test accuracy of 98.57% on MNIST, 88.09% on Fashion MNIST, and 68.33% on CIFAR-10. Furthermore, by studying the generated architectures, we provide several intriguing findings of designing private-learning-friendly DNNs, which can shed new light on model design for deep learning with differential privacy.
Researcher Affiliation | Collaboration | Anda Cheng (1,2), Jiaxing Wang (3), Xi Sheryl Zhang (1,4), Qiang Chen (1), Peisong Wang (1), Jian Cheng (1,2,4)*; 1 Institute of Automation, Chinese Academy of Sciences; 2 School of Artificial Intelligence, University of Chinese Academy of Sciences; 3 JD.com; 4 AIRIA
Pseudocode | Yes | Algorithm 1: Search Process of DPNAS (a minimal sketch of this search loop is given after the table)
Open Source Code | No | The paper states: "We implement the search process and private training by PyTorch (Paszke et al. 2019) with opacus package." However, it does not state that the authors' own code for the described methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | We run DPNAS on MNIST, Fashion MNIST, and CIFAR-10. We split the training data of each dataset into the training set and validation set with the ratio of 0.6 : 0.4.
Dataset Splits | Yes | We run DPNAS on MNIST, Fashion MNIST, and CIFAR-10. We split the training data of each dataset into the training set and validation set with the ratio of 0.6 : 0.4. (A sketch of such a split is given after the table.)
Hardware Specification | Yes | All experiments are conducted on an NVIDIA Titan RTX GPU with 24GB of RAM.
Software Dependencies | No | The paper mentions: "We implement the search process and private training by PyTorch (Paszke et al. 2019) with opacus package." While PyTorch and opacus are named, the specific version numbers needed for reproducibility are not provided for either package.
Experiment Setup | Yes | The sampled architectures are trained with DPSGD with weight decay 2e-4 and momentum 0.9. The batch size is set to 300 and the learning rate is set to 0.02. The RNN controller used in the search process is the same as the RNN controller used in (Pham et al. 2018). It is trained with the Adam optimizer (Kingma and Ba 2014); the batch size is set to 64 and the learning rate is set to 3e-4. The trade-off weight for controller entropy in the reward is set to 0.05. The search process runs for 100 epochs. (A sketch of DP-SGD training with these settings follows the table.)
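
The Pseudocode row cites Algorithm 1 (Search Process of DPNAS). Below is a minimal sketch of such a controller-based search loop under the controller settings quoted above (Adam, learning rate 3e-4, entropy weight 0.05, 100 search epochs). It is not the authors' implementation: controller.sample, build_model, train_with_dpsgd, and evaluate are assumed placeholder interfaces.

```python
# Hypothetical sketch of a DPNAS-style search loop (Algorithm 1), not the
# authors' code. controller.sample, build_model, train_with_dpsgd and evaluate
# are assumed placeholder interfaces supplied by the caller.
import torch

def dpnas_search(controller, search_space, train_set, val_set,
                 build_model, train_with_dpsgd, evaluate, epochs=100):
    # Controller settings quoted in the Experiment Setup row: Adam, lr 3e-4.
    optimizer = torch.optim.Adam(controller.parameters(), lr=3e-4)
    for _ in range(epochs):
        # The RNN controller samples a candidate architecture.
        arch, log_prob, entropy = controller.sample(search_space)
        model = build_model(arch)
        # The candidate is trained privately with DP-SGD on the training split.
        train_with_dpsgd(model, train_set)
        # Validation accuracy of the privately trained model is the reward.
        reward = evaluate(model, val_set)
        # REINFORCE-style update with an entropy bonus (weight 0.05 per the paper).
        loss = -(log_prob * reward + 0.05 * entropy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return controller
```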
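
The Open Datasets and Dataset Splits rows quote a 0.6 : 0.4 split of each dataset's training data into training and validation sets. A minimal sketch of such a split, shown here on CIFAR-10 with torchvision and torch.utils.data.random_split (illustrative only, not the authors' code):

```python
# Minimal sketch of the 0.6 : 0.4 train/validation split described in the paper,
# illustrated on CIFAR-10; not the authors' code.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
n_train = int(0.6 * len(full_train))   # 30,000 images for search/training
n_val = len(full_train) - n_train      # 20,000 images for validation reward
train_set, val_set = random_split(full_train, [n_train, n_val],
                                  generator=torch.Generator().manual_seed(0))
```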
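
The Software Dependencies and Experiment Setup rows describe private training with PyTorch and opacus under DP-SGD with weight decay 2e-4, momentum 0.9, batch size 300, and learning rate 0.02. The following is a hedged sketch of such a training routine assuming the opacus >= 1.0 PrivacyEngine API (the paper does not pin versions); the clipping bound max_grad_norm=1.0 is an illustrative placeholder, not a value reported in the paper.

```python
# Hedged sketch of private training with PyTorch + opacus (opacus >= 1.0 API
# assumed; the paper does not state package versions). Hyperparameters follow
# the Experiment Setup row; max_grad_norm is an illustrative placeholder.
import torch
from torch.utils.data import DataLoader
from opacus import PrivacyEngine

def private_train(model, train_set, epochs, target_epsilon=3.0, target_delta=1e-5):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                                momentum=0.9, weight_decay=2e-4)
    train_loader = DataLoader(train_set, batch_size=300, shuffle=True)

    # Wrap model, optimizer, and loader so that DP-SGD noise is calibrated to
    # the target (epsilon, delta) over the given number of epochs.
    privacy_engine = PrivacyEngine()
    model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        target_epsilon=target_epsilon,
        target_delta=target_delta,
        epochs=epochs,
        max_grad_norm=1.0,  # per-sample gradient clipping bound (placeholder)
    )

    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```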