Deliberation Learning for Image-to-Image Translation
Authors: Tianyu He, Yingce Xia, Jianxin Lin, Xu Tan, Di He, Tao Qin, Zhibo Chen
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify our proposed method on four two-domain translation tasks and one multi-domain translation task. Both the qualitative and quantitative results demonstrate the effectiveness of our method. |
| Researcher Affiliation | Collaboration | (1) CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, University of Science and Technology of China; (2) Microsoft Research Asia; (3) Key Laboratory of Machine Perception, MOE, School of EECS, Peking University |
| Pseudocode | No | The paper describes the framework with equations and textual steps for the training process but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions implementing based on official CycleGAN and StarGAN code (with corresponding footnotes pointing to their GitHub repositories), but does not explicitly state that *their* deliberation learning code is open-source or provide a link for it. |
| Open Datasets | Yes | Tasks. We select four tasks evaluated in CycleGAN [Zhu et al., 2017]: semantic Label↔Photo translation on the Cityscapes dataset [Cordts et al., 2016], Apple↔Orange translation, Winter↔Summer translation, and Photo↔Paint translation. We used the publicly available CelebA dataset [Liu et al., 2015] for facial attributes translation. |
| Dataset Splits | No | The paper states for CelebA that 'The test set is randomly sampled (2,000 images) and the remaining images are used for training.', but does not explicitly provide details about a validation set split for any of the datasets used. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions using PyTorch for implementation but does not specify the version number or any other software dependencies with their respective versions. |
| Experiment Setup | Yes | Implementation details. We use Adam with an initial learning rate of 2×10⁻⁴ to train the models for the first 100 epochs. Then we linearly decay the learning rate to 0 in the next 100 epochs. For multi-domain translation, we use Adam with an initial learning rate of 1×10⁻⁴ to train the models for the first 100,000 iterations. Then we linearly decay the learning rate to 0 in the next 100,000 iterations. The batch size is set to 16. (A minimal sketch of this learning-rate schedule follows the table.) |
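
A minimal sketch of the two-domain schedule quoted above (constant 2×10⁻⁴ for the first 100 epochs, then linear decay to 0 over the next 100), assuming PyTorch as the paper states. The stand-in model, the empty training-loop body, and the Adam defaults are assumptions for illustration, not details reported in the paper.

```python
import torch

# Stand-in for the actual translation network (architecture not shown here).
model = torch.nn.Linear(3, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

def lr_lambda(epoch, constant_epochs=100, decay_epochs=100):
    """Keep the initial rate for the first 100 epochs, then decay linearly to 0."""
    if epoch < constant_epochs:
        return 1.0
    return max(0.0, 1.0 - (epoch - constant_epochs) / decay_epochs)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(200):
    # ... one pass over the training data would go here ...
    optimizer.step()   # placeholder for the real per-batch update
    scheduler.step()   # scales the base lr by lr_lambda(epoch + 1)
```

The multi-domain variant follows the same pattern with a base rate of 1×10⁻⁴ and the schedule expressed in iterations (100,000 constant, 100,000 decaying) rather than epochs.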