Unsupervised Cross-Domain Image Generation

Authors: Yaniv Taigman, Adam Polyak, Lior Wolf

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The Domain Transfer Network (DTN) is evaluated in two application domains: digits and face images. In the first domain, we transfer images from the Street View House Number (SVHN) dataset of Netzer et al. (2011) to the domain of the MNIST dataset by LeCun & Cortes (2010). In the face domain, we transfer a set of random and unlabeled face images to a space of emoji images. In both cases, the source and target domains differ considerably.
Researcher Affiliation | Industry | Yaniv Taigman, Adam Polyak & Lior Wolf, Facebook AI Research, Tel-Aviv, Israel. {yaniv,adampolyak,wolf}@fb.com
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any links to open-source code or state that code will be released.
Open Datasets | Yes | For working with digits, we employ the extra training split of SVHN, which contains 531,131 images, for two purposes: learning the function f and as an unsupervised training set s for the domain transfer method. The evaluation is done on the test split of SVHN, comprised of 26,032 images. The set t contains the test set of the MNIST dataset. For supporting quantitative evaluation, we have trained a classifier on the train set of the MNIST dataset, consisting of the same architecture as f.
Dataset Splits | No | While the paper mentions that 'no further reduction of validation error was observed on L_CONST' during hyperparameter tuning, it does not provide specific details about the validation split (e.g., percentages, sample counts, or the methodology used to create it).
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions using 'Adam by Kingma & Ba (2016)' as the optimization algorithm and 'Radford et al. (2015)' as inspiration for the network architecture, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or specific library versions).
Experiment Setup | Yes | In the digit experiments, the results were obtained with the tradeoff hyperparameters α = β = 15. We did not observe a need to add a smoothness term, and the weight of L_TV was set to γ = 0. In the face experiments, we set α = 100, β = 1, γ = 0.05 as the tradeoff hyperparameters within L_G via validation.
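The α, β, γ values in the Experiment Setup row weight the terms of the DTN generator objective, which combines an adversarial term with the constancy (L_CONST), identity (L_TID), and smoothness (L_TV) terms. A minimal sketch of this weighted sum, assuming unit placeholder values for the individual loss terms (the function name and placeholder inputs are illustrative, not from the paper):

```python
def generator_loss(l_gan, l_const, l_tid, l_tv, alpha, beta, gamma):
    """Weighted generator objective: L_G = L_GAN + alpha*L_CONST + beta*L_TID + gamma*L_TV."""
    return l_gan + alpha * l_const + beta * l_tid + gamma * l_tv

# Digit experiments (quoted above): alpha = beta = 15, gamma = 0 (L_TV disabled).
digit_loss = generator_loss(1.0, 1.0, 1.0, 1.0, alpha=15, beta=15, gamma=0)

# Face experiments (quoted above): alpha = 100, beta = 1, gamma = 0.05.
face_loss = generator_loss(1.0, 1.0, 1.0, 1.0, alpha=100, beta=1, gamma=0.05)
```

Setting γ = 0 in the digit experiments simply drops the smoothness term, matching the quote that no L_TV weight was needed there.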
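The digit-domain data usage quoted in the Open Datasets row can be summarized as a small configuration sketch. The image counts are taken verbatim from the quote; the dictionary layout and helper function are illustrative, and split sizes not stated in the quote are left as None rather than guessed:

```python
# Data usage for the DTN digit experiments, per the paper's description.
DIGIT_SPLITS = {
    "svhn_extra":  {"size": 531_131, "role": "learn f; unsupervised source set s"},
    "svhn_test":   {"size": 26_032,  "role": "evaluation"},
    "mnist_test":  {"size": None,    "role": "unsupervised target set t"},   # size not quoted
    "mnist_train": {"size": None,    "role": "train evaluation classifier"}, # size not quoted
}

def total_svhn_images(splits):
    """Sum the SVHN image counts stated in the paper across their roles."""
    return sum(v["size"] for k, v in splits.items() if k.startswith("svhn"))
```

This makes the assessment above concrete: the SVHN side of the split is fully specified (557,163 images across two roles), while no held-out validation split is documented.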