Mask Based Unsupervised Content Transfer
Authors: Ron Mokady, Sagie Benaim, Lior Wolf, Amit Bermano
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method for guided content transfer, out of domain manipulation, attribute removal, sequential content transfer, sequential attribute removal and content addition, and weakly supervised segmentation of the domain specific content. To assess the quality of the domain translation, we conduct a handful of quantitative evaluations. In Tab. 1, we consider the Fréchet Inception Distance (FID) Heusel et al. (2017) and Kernel Inception Distance (KID) Bińkowski et al. (2018) scores of images with the common part of a and separate part of b over a test set of images from domains A and B. The FID score is a commonly used metric to evaluate the quality and diversity of produced images; KID is a recently proposed alternative to FID. We note that these values should only be used comparatively, as the size of the test set used affects the score magnitude. (A minimal FID/KID evaluation sketch follows the table.) |
| Researcher Affiliation | Collaboration | Ron Mokady¹, Sagie Benaim¹, Lior Wolf¹,², and Amit Bermano¹. ¹The School of Computer Science, Tel Aviv University; ²Facebook AI Research |
| Pseudocode | No | The paper describes the architecture and mathematical formulations (e.g., equations for loss functions), but it does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/rmokady/mbu-content-tansfer. |
| Open Datasets | Yes | We employ three attributes that are expressed locally in the images of the celebA dataset Yang et al. (2015): smile, facial hair, and glasses. Handbags: We also consider the domain of handbags Zhu et al. (2016). We further consider the ability of our method to perform translation on images from the out-of-distribution LFW dataset Huang et al. (2007). |
| Dataset Splits | No | The paper mentions "We constructed the train/test sets using 90%-95% split. This consists of about 7,200-18,000 examples for train and about 800-2,000 examples for test for each attribute." However, it does not explicitly mention a separate validation set split or its size. (A hypothetical split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware specifications (e.g., GPU model, CPU type) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of the "Adam optimizer with β1 = 0.5, β2 = 0.999, and learning rate of 0.0002." However, it does not specify any software names with version numbers for libraries, frameworks, or operating systems. |
| Experiment Setup | Yes | We use the Adam optimizer with β1 = 0.5, β2 = 0.999, and learning rate of 0.0002. We use a batch size of 32 in training. (An optimizer setup sketch follows the table.) |
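
The FID/KID evaluation quoted in the Research Type row can be approximated with off-the-shelf metric implementations. The sketch below uses `torchmetrics`, which is an assumption: the paper does not name the metric tooling it used. The image tensors here are random placeholders standing in for real test images and translated outputs.

```python
# Minimal FID/KID evaluation sketch. torchmetrics is an assumption:
# the paper does not say which implementation it used.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
kid = KernelInceptionDistance(subset_size=100)  # subset_size must not exceed the sample count

# Placeholder uint8 tensors of shape (N, 3, H, W) in [0, 255]; substitute
# real test images from domain B and the translated outputs of the model.
real_images = torch.randint(0, 256, (128, 3, 128, 128), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (128, 3, 128, 128), dtype=torch.uint8)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
kid.update(real_images, real=True)
kid.update(fake_images, real=False)

print(f"FID: {fid.compute().item():.2f}")
kid_mean, kid_std = kid.compute()
print(f"KID: {kid_mean.item():.4f} +/- {kid_std.item():.4f}")
```

As the paper cautions, both scores depend on the size of the evaluation set, so they are only meaningful when compared across runs that use the same number of test images.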
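The Dataset Splits row reports only a train/test partition. Since the paper does not describe how the split was performed, the following is a hypothetical reconstruction using a fixed-seed random split; the 90% fraction is taken from the quoted range.

```python
# Hypothetical train/test split matching the reported ~90%-10% proportions.
# random_split and the seed choice are assumptions, not the paper's procedure.
import torch
from torch.utils.data import random_split

def split_dataset(dataset, train_frac=0.9, seed=0):
    n_train = int(len(dataset) * train_frac)
    n_test = len(dataset) - n_train
    generator = torch.Generator().manual_seed(seed)  # fixed seed so the split is reproducible
    return random_split(dataset, [n_train, n_test], generator=generator)

# Example: ~8,000 attribute images yield ~7,200 train / ~800 test,
# consistent with the counts quoted in the table.
```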
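The Experiment Setup row pins down the optimizer and batch size exactly. A minimal PyTorch sketch follows; the `generator` and `discriminator` modules are hypothetical stand-ins, not the paper's actual architecture.

```python
# Optimizer configuration as reported: Adam, beta1=0.5, beta2=0.999, lr=2e-4, batch size 32.
# The two tiny networks below are placeholders, not the paper's architecture.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

generator = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU())
discriminator = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Batch size of 32, per the paper; the dataset here is a random placeholder.
train_set = TensorDataset(torch.rand(320, 3, 128, 128))
loader = DataLoader(train_set, batch_size=32, shuffle=True)
```

The (0.5, 0.999) beta pair is a common choice for adversarial training, consistent with the adversarial losses the paper describes.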