Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection

Authors: Bingzhe Wu, Shiwan Zhao, Chaochao Chen, Haoyang Xu, Li Wang, Xiaolu Zhang, Guangyu Sun, Jun Zhou

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quantitatively, to evaluate the information leakage of well-trained GAN models, we perform various membership attacks on these models. The results show that previous Lipschitz regularization techniques are effective in not only reducing the generalization gap but also alleviating the information leakage of the training dataset.
Researcher Affiliation | Collaboration | (1) Peking University, (2) IBM Research, (3) Ant Financial, (4) Advanced Institute of Information Technology, Peking University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks with labels such as “Algorithm” or “Pseudocode”.
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | We conduct experiments on a face image dataset and a real clinical dataset, namely, Labeled Faces in the Wild (LFW) [20], which consists of 13,233 face images, and the IDC dataset, which is publicly available for invasive ductal carcinoma (IDC) classification.
Dataset Splits | No | The paper describes training and testing splits for the LFW and IDC datasets but does not explicitly mention a separate validation split.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory specifications, used for running the experiments.
Software Dependencies | No | The paper mentions using “Adam [18]” for optimization but does not provide specific software dependencies such as library names with version numbers (e.g., PyTorch version, TensorFlow version, CUDA version).
Experiment Setup | Yes | As for optimization, we use Adam [18] in all experiments, with different hyper-parameters for different training strategies. To be specific, for the GAN trained with the JS divergence and without any regularization terms (the original GAN [12]), the learning rate is set to 0.0004, while for the other GANs (e.g., those trained using the Wasserstein distance) the learning rate is set to 0.0002. More details of the hyper-parameter settings (e.g., β in Adam) can be found in the Appendix. We train all models for 400 epochs on both datasets.
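
The Experiment Setup row quotes only the optimizer choice, the two learning rates, and the epoch budget. The sketch below shows what that configuration might look like in PyTorch; the model definitions, the use_wasserstein flag, and the Adam beta values are illustrative placeholders (the paper defers the beta settings to its appendix), while the 0.0004 / 0.0002 learning rates and the 400-epoch budget come from the quoted text.

```python
# Minimal sketch of the quoted optimizer settings, assuming PyTorch.
# Only Adam, the 0.0004/0.0002 learning rates, and 400 epochs are from the paper;
# everything else here is a placeholder for illustration.
import torch

generator = torch.nn.Linear(128, 64 * 64)      # placeholder generator
discriminator = torch.nn.Linear(64 * 64, 1)    # placeholder discriminator

use_wasserstein = False  # True for the regularized / Wasserstein-distance variants

# 0.0004 for the original (JS-divergence) GAN, 0.0002 for the other variants
lr = 0.0002 if use_wasserstein else 0.0004

# Beta values are not given in the quoted text (deferred to the paper's appendix);
# these are common GAN defaults used purely as an assumption.
betas = (0.5, 0.999)

opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=betas)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=betas)

num_epochs = 400  # "We train all models for 400 epochs on both datasets."
```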
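The Research Type row notes that information leakage is evaluated with "various membership attacks" on trained GANs, but those attacks are not described in this summary. Purely for orientation, the sketch below shows one generic score-thresholding membership-inference baseline against a GAN discriminator; the function names and thresholding logic are assumptions and should not be read as the paper's actual attack suite.

```python
# Generic membership-inference sketch against a trained GAN discriminator (an
# assumption, not the paper's attacks): members of the training set tend to
# receive higher discriminator scores than held-out samples.
import torch

@torch.no_grad()
def membership_scores(discriminator, samples):
    """Raw discriminator scores for a batch of samples."""
    return discriminator(samples).squeeze(-1)

@torch.no_grad()
def attack_accuracy(discriminator, train_samples, holdout_samples, threshold):
    """Fraction of samples correctly labeled member / non-member by thresholding."""
    member_pred = membership_scores(discriminator, train_samples) > threshold
    nonmember_pred = membership_scores(discriminator, holdout_samples) <= threshold
    correct = member_pred.sum() + nonmember_pred.sum()
    total = len(train_samples) + len(holdout_samples)
    return correct.item() / total

# Example usage (with hypothetical tensors and threshold):
# acc = attack_accuracy(trained_discriminator, train_images, holdout_images, threshold=0.5)
```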