Distilling Portable Generative Adversarial Networks for Image Translation
Authors: Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
AAAI 2020, pp. 3585–3592
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Qualitative and quantitative analyses of experiments conducted on benchmark datasets demonstrate that the proposed method can learn portable generative models with strong performance. |
| Researcher Affiliation | Collaboration | 1Key Lab of Machine Perception (MOE), CMIC, School of EECS, Peking University, China, 2Huawei Noah's Ark Lab, 3Huawei Consumer Business Group, 4National Engineering Laboratory for Video Technology, Peking University, 5Peng Cheng Laboratory, 6School of Computer Science, Faculty of Engineering, The University of Sydney, Australia |
| Pseudocode | Yes | Algorithm 1: Portable GAN learning via distillation (a sketch of the distillation objective appears after this table). |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We first conducted the semantic label→photo task on the Cityscapes dataset (Cordts et al. 2016) using pix2pix. We evaluate two datasets for CycleGAN: horse↔zebra and label↔photo. |
| Dataset Splits | Yes | The dataset is divided into about 3,000 training images, 500 validation images and about 1,500 test images, which are all paired data. |
| Hardware Specification | No | The paper mentions 'PCs with modern GPUs' and discusses 'enormous computational resources' and 'heavy computation and storage cost', but does not provide specific hardware details such as the GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions software components like 'U-Net', 'PatchGANs', 'Adam solver', and 'FCN-8s' but does not specify version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | The hyper-parameter λ in Fcn. (3) is set to 1. For the discriminator networks, we use 70×70 PatchGANs... When optimizing the networks, the objective value is divided by 2 while optimizing D. The networks are trained for 200 epochs using the Adam solver with a learning rate of 0.0002. When testing the GANs, the generator was run in the same manner as training but without dropout. (See the training-loop sketch after this table.) |
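The paper's Algorithm 1 ("Portable GAN learning via distillation") is only quoted by name above, so the sketch below is a minimal reconstruction of the general idea rather than the authors' exact objective: a compact student generator is trained with the standard adversarial loss plus a λ-weighted term that makes it mimic a frozen, pre-trained teacher generator. The pixel-level MSE distillation term, the placeholder architectures, and all names (`teacher`, `student`, `student_loss`) are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder networks: a small student generator distilled from a larger,
# pre-trained teacher generator. These architectures are illustrative stand-ins,
# not the paper's exact U-Net / PatchGAN configurations.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 3, 3, padding=1))
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1),
                              nn.LeakyReLU(0.2), nn.Conv2d(64, 1, 4))

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # the λ that the paper sets to 1 in Fcn. (3)

def student_loss(x):
    """Adversarial loss plus a λ-weighted distillation term that pulls the
    student's translation toward the frozen teacher's output. The pixel-level
    MSE is an assumed form; the paper's full objective may contain more terms."""
    with torch.no_grad():
        y_teacher = teacher(x)          # teacher stays frozen during distillation
    y_student = student(x)
    logits = discriminator(y_student)
    adv = bce(logits, torch.ones_like(logits))   # try to fool the discriminator
    distill = mse(y_student, y_teacher)          # mimic the teacher's output
    return adv + lam * distill
```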
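Continuing the sketch above, the training loop below wires in the optimizer settings quoted in the Experiment Setup row: Adam with a learning rate of 0.0002, 200 epochs, and the discriminator objective divided by 2. The Adam betas (pix2pix's customary 0.5/0.999) and the `loader` data iterator are hypothetical; the paper's quoted setup does not specify them.

```python
import torch

# Optimizer settings quoted from the paper: Adam, learning rate 2e-4, 200 epochs.
# The betas are an assumption, not stated in the quoted setup.
opt_g = torch.optim.Adam(student.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(200):
    for x, y_real in loader:  # `loader` is a placeholder paired-data iterator
        # --- discriminator step: objective divided by 2, as stated in the paper ---
        y_fake = student(x).detach()
        real_logits = discriminator(y_real)
        fake_logits = discriminator(y_fake)
        d_loss = 0.5 * (bce(real_logits, torch.ones_like(real_logits)) +
                        bce(fake_logits, torch.zeros_like(fake_logits)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # --- generator (student) step: adversarial + λ-weighted distillation ---
        g_loss = student_loss(x)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Halving the discriminator objective follows the convention the paper cites from pix2pix: it slows the rate at which D learns relative to G, which tends to stabilize adversarial training.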