Dynamically Masked Discriminator for GANs
Authors: Wentian Zhang, Haozhe Liu, Bing Li, Jinheng Xie, Yawen Huang, Yuexiang Li, Yefeng Zheng, Bernard Ghanem
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 Experiments |
| Researcher Affiliation | Collaboration | Wentian Zhang (1,2), Haozhe Liu (1,2), Bing Li (1, corresponding), Jinheng Xie (3), Yawen Huang (2), Yuexiang Li (2,4, corresponding), Yefeng Zheng (2), Bernard Ghanem (1); affiliations: 1 AI Initiative, King Abdullah University of Science and Technology; 2 Jarvis Research Center, Tencent YouTu Lab; 3 National University of Singapore; 4 Life Sciences Institute, Guangxi Medical University |
| Pseudocode | Yes | the pseudo-code is given in Algorithm 1. (An illustrative sketch of the masking idea appears after this table.) |
| Open Source Code | Yes | Code is available at https://github.com/WentianZhang-ML/DMD. |
| Open Datasets | Yes | Dataset. We evaluate the performance of our method on six widely used datasets, including AFHQ-Cat [12], AFHQ-Dog [12], AFHQ-Wild [12], FFHQ [35], and LSUN-Church [90] with 256×256 resolution and CIFAR-10 [39] with 32×32 resolution. |
| Dataset Splits | No | The paper mentions datasets used for training and testing samples for evaluation metrics, but it does not specify explicit training, validation, and test dataset splits (e.g., percentages or counts) from the original datasets. |
| Hardware Specification | Yes | Our models were trained on a workstation equipped with 8 NVIDIA Tesla V100 GPUs with 32 GB memory, a CPU of 2.8GHz, and 512GB RAM. |
| Software Dependencies | No | The paper mentions using the 'PyTorch [59] public platform' but does not specify its version number or any other software dependencies with version details. |
| Experiment Setup | Yes | We maintain consistency with ADA [33], by using identical network architectures [36], weight demodulation [36], style mixing regularization [35], path length regularization, lazy regularization [36], equalized learning rate for all trainable parameters [32], non-saturating logistic loss [19] with R1 regularization [54], and the Adam optimizer [37]. |
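The Experiment Setup row confirms that training uses the non-saturating logistic loss [19] with R1 regularization [54] under the Adam optimizer [37]. Below is a minimal PyTorch sketch of that objective; the generator output, the discriminator `D`, and the `r1_gamma` value are placeholders, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def generator_loss(D, fake_images):
    # Non-saturating logistic loss: softplus(-D(G(z))) instead of -log D(G(z)).
    return F.softplus(-D(fake_images)).mean()

def discriminator_loss(D, real_images, fake_images, r1_gamma=10.0):
    # Logistic loss on the real and fake branches.
    real_images = real_images.detach().requires_grad_(True)
    real_logits = D(real_images)
    fake_logits = D(fake_images.detach())
    loss = F.softplus(-real_logits).mean() + F.softplus(fake_logits).mean()

    # R1 regularization: penalize the squared gradient norm of D at real samples.
    grad_real, = torch.autograd.grad(
        outputs=real_logits.sum(), inputs=real_images, create_graph=True
    )
    r1_penalty = grad_real.pow(2).sum(dim=[1, 2, 3]).mean()
    return loss + 0.5 * r1_gamma * r1_penalty
```

With the lazy regularization also listed in that row, the R1 term would in practice be evaluated only every few discriminator steps rather than at every iteration.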
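The title and the Algorithm 1 reference describe a discriminator whose features are dynamically masked during training. The module below is an illustrative sketch only, not a reproduction of the authors' Algorithm 1: it assumes masking means randomly zeroing a fraction of intermediate discriminator feature channels with a ratio that varies over training, and the class name, schedule, and `max_ratio` parameter are all hypothetical.

```python
import torch
import torch.nn as nn

class DynamicFeatureMask(nn.Module):
    """Hypothetical module: randomly zeroes discriminator feature channels,
    with a mask ratio updated as training progresses."""

    def __init__(self, max_ratio=0.5):
        super().__init__()
        self.max_ratio = max_ratio
        self.ratio = 0.0  # set externally by a training-progress schedule

    def update(self, progress):
        # Assumed schedule: ramp the mask ratio linearly with progress in [0, 1].
        self.ratio = self.max_ratio * float(progress)

    def forward(self, features):
        if not self.training or self.ratio == 0.0:
            return features
        # Per-sample, per-channel Bernoulli mask, rescaled to preserve the
        # expected activation magnitude (dropout-style).
        keep_prob = 1.0 - self.ratio
        mask = torch.bernoulli(torch.full(
            (features.size(0), features.size(1), 1, 1),
            keep_prob, device=features.device,
        ))
        return features * mask / keep_prob
```

Such a module would sit between discriminator blocks, with `update()` called once per training iteration; consult the released code at https://github.com/WentianZhang-ML/DMD for the authors' actual mechanism.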