Image-to-Image Translation with Multi-Path Consistency Regularization

Authors: Jianxin Lin, Yingce Xia, Yijun Wang, Tao Qin, Zhibo Chen

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct various experiments to demonstrate the effectiveness of our proposed methods, including face-to-face translation, paint-to-photo translation, and de-raining/de-noising translation.
Researcher Affiliation | Collaboration | Jianxin Lin¹, Yingce Xia³, Yijun Wang², Tao Qin³ and Zhibo Chen¹; ¹CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, University of Science and Technology of China; ²University of Science and Technology of China; ³Microsoft Research Asia
Pseudocode | No | The paper describes its framework and training processes using mathematical equations and textual explanations, but it does not include a formal pseudocode block or algorithm listing.
Open Source Code | No | The paper does not provide any statement about releasing its source code or a link to a code repository.
Open Datasets | Yes | For multi-domain face-to-face translation, we use the CelebA dataset [Liu et al., 2015]... For multi-domain paint-to-photo translation, we use the paintings and photographs collected by [Zhu et al., 2017]... We use the raining images and original images collected by [Fu et al., 2017; Yang et al., 2017].
Dataset Splits | No | The paper refers to 'training data' and a 'test set' but does not specify the exact percentages, sample counts, or methodology used for the train/validation/test dataset splits needed for reproduction.
Hardware Specification | Yes | All the models are trained on one NVIDIA K40 GPU for one day.
Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not specify version numbers for any key software components, libraries, or frameworks (e.g., Python, PyTorch/TensorFlow, CUDA versions).
Experiment Setup | Yes | We use Adam optimizer [Kingma and Ba, 2014] with learning rate 0.0001 for the first 10 epochs and linearly decay the learning rate every 10 epochs. The α in Eqn. (4) and Eqn. (7) is set to 0.1, and β in Eqn. (7) is also set to 0.1.
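To make the reported setup concrete, below is a minimal sketch of the optimizer and learning-rate schedule quoted above. The paper does not state its framework, so PyTorch is an assumption; the generator is a placeholder, and the names num_epochs, decay_every, and lr_lambda are hypothetical. Only the learning rate (0.0001 for the first 10 epochs, then linear decay every 10 epochs) and the weights α = β = 0.1 come from the paper.

```python
# Sketch of the reported training configuration (assumes PyTorch;
# the paper does not name its framework, so these APIs are an assumption).
import torch
from torch import nn, optim

# Hyperparameters quoted from the paper's experiment setup.
base_lr = 1e-4        # Adam learning rate for the first 10 epochs
alpha = 0.1           # weight used in Eqn. (4) and Eqn. (7)
beta = 0.1            # additional weight in Eqn. (7)
num_epochs = 100      # hypothetical: total epoch count is not reported
decay_every = 10      # "linearly decay the learning rate every 10 epochs"

# Placeholder generator; the paper's actual architecture is not reproduced here.
generator = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 3, 3, padding=1))
optimizer = optim.Adam(generator.parameters(), lr=base_lr)

def lr_lambda(epoch: int) -> float:
    """Keep base_lr for the first 10 epochs, then decay it linearly in
    steps of `decay_every` epochs (one plausible reading of the paper)."""
    if epoch < decay_every:
        return 1.0
    steps_total = max(1, (num_epochs - decay_every) // decay_every)
    steps_done = (epoch - decay_every) // decay_every + 1
    return max(0.0, 1.0 - steps_done / (steps_total + 1))

scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(num_epochs):
    # ... compute the translation, adversarial, and multi-path consistency
    # losses here, weighting them with alpha/beta as in Eqn. (4) and (7),
    # call backward(), then update the parameters and the schedule:
    optimizer.step()
    scheduler.step()
```

Note that "linearly decay the learning rate every 10 epochs" admits more than one reading; the stepwise interpolation in lr_lambda is only one possibility, and the decay endpoint is not specified in the paper.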