Model Conversion via Differentially Private Data-Free Distillation
Authors: Bochao Liu, Pengju Wang, Shikun Li, Dan Zeng, Shiming Ge
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments clearly demonstrate that our approach significantly outperforms other differentially private generative approaches. |
| Researcher Affiliation | Academia | Bochao Liu (1,2), Pengju Wang (1,2), Shikun Li (1,2), Dan Zeng (3), Shiming Ge (1,2). (1) Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China; (2) School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China; (3) School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China |
| Pseudocode | Yes | Algorithm 1 DPDFD |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is released. |
| Open Datasets | Yes | Datasets. We conduct our experiments on 7 image datasets, including MNIST [LeCun et al., 1998], Fashion-MNIST (FMNIST) [Xiao et al., 2017], CIFAR10 [Krizhevsky, 2009], CelebA [Liu et al., 2015], PathMNIST [Yang et al., 2021], COVIDx and ImageNet [Deng et al., 2009]. |
| Dataset Splits | Yes | We conduct our experiments on 7 image datasets, including MNIST [LeCun et al., 1998], Fashion-MNIST (FMNIST) [Xiao et al., 2017], CIFAR10 [Krizhevsky, 2009], CelebA [Liu et al., 2015], PathMNIST [Yang et al., 2021], COVIDx and ImageNet [Deng et al., 2009]. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | Algorithm 1 DPDFD. Input: training iterations T, loss functions L_T, L_S, L_G, noise scale σ, sample size B, learning rates γ, γ_s, γ_g, gradient norm bound C, a positive stability constant e |
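The Algorithm 1 inputs quoted above (noise scale σ, sample size B, gradient norm bound C) match the standard differentially private gradient-aggregation pattern: clip each per-sample gradient to norm C, sum, and add Gaussian noise calibrated to σ·C. The sketch below illustrates only that generic pattern, not the paper's full DPDFD procedure; the function name and aggregation details are assumptions.

```python
import numpy as np

def dp_aggregate(per_sample_grads, C, sigma, seed=None):
    """Hypothetical sketch of a DP-SGD-style gradient step, matching
    Algorithm 1's quoted inputs (noise scale sigma, sample size B,
    gradient norm bound C). Not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    B = len(per_sample_grads)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its L2 norm exceeds C.
        clipped.append(g / max(1.0, norm / C))
    total = np.sum(clipped, axis=0)
    # Gaussian noise with standard deviation sigma * C masks any
    # single sample's contribution; average over the batch of size B.
    noisy = total + rng.normal(0.0, sigma * C, size=total.shape)
    return noisy / B
```

With sigma = 0 the function reduces to the mean of the clipped gradients, which makes the clipping behavior easy to verify in isolation.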