Hierarchical Modes Exploring in Generative Adversarial Networks
Authors: Mengxiao Hu, Jinlong Li, Maolin Hu, Tao Hu (pp. 10981-10988)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validated the proposed algorithm on four conditional image synthesis tasks including categorical generation, paired and unpaired image translation, and text-to-image generation. Both qualitative and quantitative results show that the proposed method is effective in alleviating the mode collapse problem in cGANs, and can control the diversity of output images w.r.t. specific-level features. |
| Researcher Affiliation | Academia | Mengxiao Hu, Jinlong Li, Maolin Hu, Tao Hu, University of Science and Technology of China; mxhu@126.com, jlli@ustc.edu.cn, {humaolin, Skyful}@mail.ustc.edu.cn |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link for open-source code availability. |
| Open Datasets | Yes | For categorical generation, the model is trained on CIFAR-10 (Szegedy et al. 2015)... For paired image-to-image translation, it is trained on facades and maps, using Pix2Pix as the baseline model. For unpaired image-to-image translation, it is trained on Yosemite (Zhu et al. 2017a) and cat↔dog (Lee et al. 2018)... For text-to-image generation, it is trained on CUB-200-2011 (Wah et al. 2011)... |
| Dataset Splits | No | The paper does not explicitly provide specific percentages or sample counts for training, validation, and test splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | Because the original networks of the baseline model do not change after adding the attention unit and the regularization term, we kept the baseline model's hyper-parameters unchanged. We adopted the L1 norm as the distance metric for all d^(i)(·) and set the regularization weight β = 1 in all experiments. |
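
The setup row above mentions an L1 distance metric d^(i)(·) computed at multiple generator levels and a regularization weight β = 1, but the paper releases no code. The snippet below is a minimal, non-authoritative PyTorch-style sketch of what such a hierarchical mode-exploring regularizer might look like: the function names, the inverse-ratio form of the penalty, the ε smoothing term, and the omission of the paper's attention unit are all assumptions rather than the authors' implementation.

```python
import torch


def l1_distance(a, b):
    # Mean absolute (L1) distance per sample, averaged over all non-batch dims.
    return (a - b).abs().mean(dim=tuple(range(1, a.dim())))


def hierarchical_mode_exploring_reg(features_z1, features_z2, z1, z2, eps=1e-5):
    """Sketch of a per-level mode-exploring penalty: for each generator level i,
    push the feature distance d^(i)(G_i(z1), G_i(z2)) to grow relative to the
    latent distance d(z1, z2). Minimizing the inverse ratio is one common way
    to express this as a loss; the paper's attention-weighted combination of
    levels is not reproduced here."""
    dz = l1_distance(z1, z2) + eps
    reg = 0.0
    for f1, f2 in zip(features_z1, features_z2):
        ratio = l1_distance(f1, f2) / dz
        reg = reg + 1.0 / (ratio + eps)
    return reg.mean()


if __name__ == "__main__":
    # Toy stand-ins for two latent codes and intermediate generator outputs
    # at two levels; shapes are illustrative only.
    B, nz = 4, 128
    z1, z2 = torch.randn(B, nz), torch.randn(B, nz)
    feats1 = [torch.randn(B, 64, 16, 16), torch.randn(B, 3, 32, 32)]
    feats2 = [torch.randn(B, 64, 16, 16), torch.randn(B, 3, 32, 32)]
    beta = 1.0  # regularization weight reported in the paper's setup
    reg = beta * hierarchical_mode_exploring_reg(feats1, feats2, z1, z2)
    print(reg.item())  # in practice this would be added to the generator loss
```

In an actual training loop this term would be added to the baseline generator's adversarial loss with weight β; the `__main__` block only checks that the sketch runs on random tensors.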