Unsupervised Image-to-Image Translation Using Domain-Specific Variational Information Bound

Authors: Hadi Kazemi, Sobhan Soleymani, Fariborz Taherkhani, Seyed Mehdi Iranmanesh, Nasser M. Nasrabadi

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments aim to show that an interpretable representation can be learned by the domain-specific variational information bound. Visual results on the translation task show how the domain-specific code can alter the style of generated images in a new domain. We compare our method against baselines both qualitatively and quantitatively.
Researcher Affiliation | Academia | Hadi Kazemi (hakazemi@mix.wvu.edu), Sobhan Soleymani (ssoleyma@mix.wvu.edu), Fariborz Taherkhani (fariborztaherkhani@gmail.com), Seyed Mehdi Iranmanesh (seiranmanesh@mix.wvu.edu), and Nasser M. Nasrabadi (nasser.nasrabadi@mail.wvu.edu), West Virginia University, Morgantown, WV 26505.
Pseudocode | No | The paper describes the framework's components and loss functions mathematically, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository.
Open Datasets | Yes | We use two datasets for qualitative comparison, edges↔handbags [36] and edges↔shoes [31]. Three other datasets, namely architectural labels↔photos from the CMP Facade database [28], and CUHK Face Sketch Dataset (CUFS) [27] are employed for more qualitative evaluation.
Dataset Splits | No | The paper mentions 'train', 'validation', and 'test' in the context of model stages and refers to using 'unpaired images', but it does not provide specific percentages, sample counts, or citations for dataset splits (e.g., 70% training, 15% validation, 15% test).
Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., CPU, GPU models, memory, or cloud resources) used to conduct the experiments.
Software Dependencies | No | The paper mentions the use of 'Adam optimizer [14]' but does not provide specific version numbers for any key software components, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We use the Adam optimizer [14] for online optimization with a learning rate of 0.0002. For the reconstruction loss in (3), we set λ1 = 10 and λ2 = λ3 = 1. The values of α2 and α3 in (12) are set to 1, and α4 = α1 = β = 1. Finally, regarding the kernel parameter σ in (6), as discussed in [35], MMD is fairly robust to this parameter selection, and σ = 2·dim is a practical value in most scenarios, where dim is the dimension of vx1.
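To make the reported setup concrete, the sketch below collects the stated hyperparameters and shows a generic biased Gaussian-kernel MMD estimate using the bandwidth heuristic quoted above (σ = 2·dim). This is a minimal pure-Python illustration, not the authors' code: the `HPARAMS` names and the `mmd_squared` estimator are assumptions chosen for clarity.

```python
import math

# Hyperparameters as reported in the paper's experiment setup
# (names in this dict are illustrative, not from the paper).
HPARAMS = {
    "optimizer": "Adam",
    "learning_rate": 2e-4,
    "lambda1": 10.0,  # reconstruction weight in Eq. (3)
    "lambda2": 1.0,
    "lambda3": 1.0,
    "alpha1": 1.0,    # Eq. (12) weights
    "alpha2": 1.0,
    "alpha3": 1.0,
    "alpha4": 1.0,
    "beta": 1.0,
}

def rbf_kernel(x, y, sigma):
    """Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def mmd_squared(xs, ys, sigma=None):
    """Biased estimate of squared MMD between sample sets xs and ys.

    If sigma is None, apply the heuristic bandwidth sigma = 2 * dim
    quoted in the paper, where dim is the code-vector dimension.
    """
    if sigma is None:
        sigma = 2.0 * len(xs[0])
    m, n = len(xs), len(ys)
    kxx = sum(rbf_kernel(a, b, sigma) for a in xs for b in xs) / (m * m)
    kyy = sum(rbf_kernel(a, b, sigma) for a in ys for b in ys) / (n * n)
    kxy = sum(rbf_kernel(a, b, sigma) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2.0 * kxy
```

As a sanity check, the estimate is (numerically) zero when both sample sets are identical and strictly positive when they come from clearly separated clusters.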