Identifying Analogies Across Domains

Authors: Yedid Hoshen, Lior Wolf

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate our approach we conducted matching experiments on multiple public datasets. We have evaluated several scenarios: (i) Exact matches: Datasets on which all A and B domain images have... (ii) Partial matches:... (iii) Inexact matches:... (iv) Inexact point cloud matching:...
Researcher Affiliation | Collaboration | Yedid Hoshen (1) and Lior Wolf (1,2); 1: Facebook AI Research, 2: Tel Aviv University
Pseudocode | No | The paper describes the methodology using equations and textual explanations, but it does not include any clearly labeled pseudocode or algorithm blocks/figures.
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology (AN-GAN) is publicly available.
Open Datasets | Yes | To evaluate our approach we conducted matching experiments on multiple public datasets. Facades: 400 images of building facades aligned with segmentation maps of the buildings (Radim Tyleček, 2013). Maps: The Maps dataset was scraped from Google Maps by (Isola et al., 2017). E2S: The original dataset contains around 50K images of shoes from the Zappos50K dataset (Yu & Grauman, 2014). E2H: The original dataset contains around 137K images of Amazon handbags (Zhu et al., 2016).
Dataset Splits | No | The paper mentions using a 'training set' for the Maps dataset and refers to 'test set' accuracy/error for evaluation, but it does not specify explicit percentages or counts for train/validation/test splits for any of the datasets used.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used to conduct the experiments.
Software Dependencies | No | The paper mentions several architectures and methods (e.g., CycleGAN, VGG-16, U-net, DiscoGAN, Pix2Pix) but does not specify version numbers for any software libraries, frameworks, or programming languages used in the implementation.
Experiment Setup | Yes | Implementation: Initially β are all set to 0, giving all matches equal likelihood. We use an initial burn-in period of 200 epochs, during which δ = 0... We then optimize the exemplar loss for one α-iteration of 22 epochs, one T-iteration of 10 epochs and another α-iteration of 10 epochs... The initial learning rate for the exemplar loss is 1e-3 and it is decayed after 20 epochs by a factor of 2. We use the same architecture and hyper-parameters as CycleGAN (Zhu et al., 2017) unless noted otherwise.
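The reported schedule (200-epoch burn-in, then alternating α- and T-iterations, with the exemplar-loss learning rate halved once after 20 epochs) can be sketched as follows. This is a minimal illustration of the quoted setup, not the authors' code; the function and phase names (`build_schedule`, `"burn_in"`, etc.) are hypothetical.

```python
# Hypothetical sketch of the training schedule quoted above.
# Phase names and helper names are illustrative, not from the paper.

def exemplar_lr(epoch, initial_lr=1e-3, decay_epoch=20, decay_factor=2.0):
    """Exemplar-loss learning rate: 1e-3, decayed once by a factor of 2
    after 20 epochs, as stated in the setup description."""
    return initial_lr / decay_factor if epoch >= decay_epoch else initial_lr

def build_schedule():
    """Return (phase, num_epochs) pairs matching the reported schedule."""
    return [
        ("burn_in", 200),  # beta = 0 (all matches equally likely), delta = 0
        ("alpha", 22),     # first alpha-iteration of the exemplar loss
        ("T", 10),         # T-iteration (mapping-function update)
        ("alpha", 10),     # second alpha-iteration
    ]

if __name__ == "__main__":
    for phase, n_epochs in build_schedule():
        print(f"{phase}: {n_epochs} epochs")
    print("lr at epoch 0:", exemplar_lr(0))
    print("lr at epoch 25:", exemplar_lr(25))
```

Under this reading, the full run spans 242 epochs; how the 20-epoch decay counter aligns with the burn-in versus the exemplar-loss phases is not specified in the excerpt, so the sketch simply counts from epoch 0.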