Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings

Authors: Kangwook Lee, Hoon Kim, Changho Suh

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that our approach achieves improved performance on the gaze estimation task, outperforming (Shrivastava et al., 2017)." From Section 3.1, Cross-Domain Appearance-Based Gaze Estimation: "In this section, we apply our methodology to tackle the cross-domain appearance-based gaze estimation problem, and evaluate its performance on the MPIIGaze dataset (Zhang et al., 2015)."
Researcher Affiliation | Academia | Kangwook Lee, Hoon Kim & Changho Suh, School of Electrical Engineering, KAIST, Daejeon, South Korea. {kw1jjang,gnsrla12,chsuh}@kaist.ac.kr
Pseudocode | No | The paper describes its algorithms using text and block diagrams (Figures 1, 2, and 4), but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology, nor does it include a repository link or an explicit code-release statement.
Open Datasets | Yes | "We apply our methodology to tackle the cross-domain appearance-based gaze estimation problem, and evaluate its performance on the MPIIGaze dataset (Zhang et al., 2015). To generate a synthetic dataset of labeled human eye images, we employ UnityEyes, a high-resolution 3D simulator that renders realistic human eye regions (Wood et al., 2016)."
Dataset Splits | No | The paper mentions a validation set and "validation error" (Table 1, Section 3.1), but it does not give the split details (percentages or sample counts for the training, validation, and test sets) that would allow the data partitioning to be reproduced.
Hardware Specification | No | The paper does not report the hardware used to run its experiments (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications).
Software Dependencies | No | The paper mentions using CycleGAN and an eye-gaze prediction network based on the architecture of Shrivastava et al. (2017), but it does not provide version numbers for these components or for any other libraries needed to replicate the experiment.
Experiment Setup | Yes | "The CycleGAN was trained with a batch size of 64 and a learning rate of 2 × 10^-4. The predictor network is trained with batches of size 512, until the validation error converges. For the choice of regularization parameters, we test the performance of our algorithm with λcyc ∈ {0, 1, 5, 10, 50} and λfeature ∈ {0, 0.1, 0.5, 1.0}. As a result, we observe that λcyc = 10 and λfeature = 0.5 achieved the best performance."
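The reported setup can be captured in a small configuration sketch. This is not the authors' code (none was released); the constant and function names below are illustrative, and only the numeric values come from the paper.

```python
from itertools import product

# Hyperparameters reported in the paper (Section 3.1).
CYCLEGAN_BATCH_SIZE = 64
CYCLEGAN_LEARNING_RATE = 2e-4
PREDICTOR_BATCH_SIZE = 512

# Regularization weights swept in the paper's grid search;
# lambda_cyc = 10 and lambda_feature = 0.5 were reported best.
LAMBDA_CYC_GRID = [0, 1, 5, 10, 50]
LAMBDA_FEATURE_GRID = [0, 0.1, 0.5, 1.0]


def hyperparameter_grid():
    """Yield every (lambda_cyc, lambda_feature) pair the paper tested."""
    for lam_cyc, lam_feat in product(LAMBDA_CYC_GRID, LAMBDA_FEATURE_GRID):
        yield {"lambda_cyc": lam_cyc, "lambda_feature": lam_feat}


configs = list(hyperparameter_grid())  # 5 x 4 = 20 configurations
best = {"lambda_cyc": 10, "lambda_feature": 0.5}
```

Replicators would train one CycleGAN and predictor per configuration and select the pair minimizing validation error, since the paper reports the winning pair but not the per-configuration scores.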