Style-Guided and Disentangled Representation for Robust Image-to-Image Translation

Authors: Jaewoong Choi, Daeha Kim, Byung Cheol Song

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We chose StarGAN-v2 (Choi et al. 2020), MSGAN (Mao et al. 2019) and RGAN (Jolicoeur-Martineau 2018) as baselines. See Appendix 2 for network details. Quantitative and qualitative results are shown through intensive experiments on a total of three datasets including the two datasets used in (Choi et al. 2020).
Researcher Affiliation | Academia | Jaewoong Choi, Daeha Kim, Byung Cheol Song; Department of Electrical and Computer Engineering, Inha University, Incheon 22212, South Korea; chlwodnd500@naver.com, kdhht5022@gmail.com, bcsong@inha.ac.kr
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code can be found here: https://github.com/jaewoong1/SRIT Style-guided-I2I-translation
Open Datasets | Yes | This section evaluates the proposed SRIT algorithm for three popular datasets, i.e., CelebA-HQ (Karras et al. 2017), AFHQ (Choi et al. 2020), and Yosemite (summer and winter scenes) (Zhu et al. 2017).
Dataset Splits | No | FID is the average for the validation set and the translated images of each dataset, and LPIPS is the average for 10 images translated from the same source image. While a validation set is mentioned, neither its size nor how it was split from the full dataset is specified. (A hedged sketch of the LPIPS protocol follows the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU or CPU models, memory, specific cloud instances) used to run the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | In training all datasets, the batch size was set to 8, and 100K iterations in the training phase and 5K iterations in the boosting phase were repeated. ... For CelebA-HQ and Yosemite datasets, λ_cyc = 1, λ_sty = 1, λ_ds = 1, and λ_NPMI = 0.1. In AFHQ, λ_cyc = 1, λ_sty = 1, λ_ds = 2, and λ_NPMI = 0.1. Also, for CelebA-HQ and AFHQ, ε = 0.8, and ε = 0.7 in Yosemite.
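For reference, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. Only the numeric values come from the paper; the dictionary layout and key names below are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: numeric values are taken from the quoted setup,
# while the structure and key names are hypothetical.
TRAIN_CONFIG = {
    "batch_size": 8,
    "train_iterations": 100_000,  # training phase
    "boost_iterations": 5_000,    # boosting phase
    "loss_weights": {
        # lambda_cyc, lambda_sty, lambda_ds, lambda_NPMI per dataset
        "CelebA-HQ": {"cyc": 1, "sty": 1, "ds": 1, "npmi": 0.1},
        "Yosemite":  {"cyc": 1, "sty": 1, "ds": 1, "npmi": 0.1},
        "AFHQ":      {"cyc": 1, "sty": 1, "ds": 2, "npmi": 0.1},
    },
    "epsilon": {  # epsilon threshold reported per dataset
        "CelebA-HQ": 0.8,
        "AFHQ": 0.8,
        "Yosemite": 0.7,
    },
}
```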
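The Dataset Splits row quotes the evaluation protocol only briefly. Below is a minimal sketch of the usual interpretation: for each source image, average the pairwise LPIPS distances over 10 translated outputs, as in the StarGAN-v2 evaluation protocol. The `lpips` package and the `translate` callable are assumptions and do not come from the paper; the FID half of the protocol would likewise depend on the unspecified validation split.

```python
# Hedged sketch of the LPIPS diversity measurement described in the quote:
# translate one source image 10 times and average the pairwise LPIPS
# distances. `translate` is a hypothetical wrapper around the trained
# generator; the `lpips` pip package provides the perceptual metric.
import itertools
import torch
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, the common default

def lpips_diversity(source, translate, num_outputs=10):
    """Average pairwise LPIPS among `num_outputs` translations of one source.

    source:    image tensor of shape (1, 3, H, W), scaled to [-1, 1]
    translate: callable mapping the source tensor to one translated image
               (e.g. the generator driven by a random style code) -- assumed.
    """
    with torch.no_grad():
        outputs = [translate(source) for _ in range(num_outputs)]
        dists = [lpips_fn(a, b).item()
                 for a, b in itertools.combinations(outputs, 2)]
    return sum(dists) / len(dists)
```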