A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation
Authors: Yihua Cheng, Shiyao Huang, Fei Wang, Chen Qian, Feng Lu (pp. 10623-10630)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments are conducted on two popular gaze estimation datasets: MPIIGaze (Zhang et al. 2017b) and EYEDIAP (Mora, Monay, and Odobez 2014). MPIIGaze is the largest dataset for appearance-based gaze estimation that provides 3D gaze directions. It is commonly used in the evaluation of appearance-based methods... We conduct experiments on the evaluation set rather than the full set. |
| Researcher Affiliation | Collaboration | 1State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University, China 2Sense Time Co., Ltd. 3Peng Cheng Laboratory, Shenzhen, China 4Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | The experiments are conducted on two popular gaze estimation datasets: MPIIGaze (Zhang et al. 2017b) and EYEDIAP (Mora, Monay, and Odobez 2014). |
| Dataset Splits | Yes | MPIIGaze provides a standard evaluation protocol, which selects 3,000 images for each subject to compose the evaluation set. We conduct experiments on the evaluation set rather than the full set. We use the videos collected under screen-target sessions and sample one image per fifteen frames to construct the evaluation set. ...for both datasets, we apply the leave-one-person-out strategy to obtain robust results. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | We implement CA-Net using PyTorch. (No PyTorch version number is provided.) |
| Experiment Setup | Yes | We train the whole network for 200 epochs with a batch size of 32. The learning rate is set to 0.001. We initialize the weights of all layers with MSRA initialization (He et al. 2015). |
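
The leave-one-person-out strategy quoted under Dataset Splits can be sketched as below. This is a minimal illustration of the protocol, not the paper's code; the subject IDs follow MPIIGaze's 15-subject convention (p00-p14), and the function name is our own.

```python
def leave_one_person_out(subjects):
    """Yield (train_subjects, test_subject) pairs, one fold per subject.

    Each subject is held out in turn for testing while all remaining
    subjects form the training set, matching the leave-one-person-out
    cross-validation described in the paper.
    """
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield train, held_out

# MPIIGaze's standard evaluation protocol covers 15 subjects (p00..p14),
# each contributing 3,000 images to the evaluation set.
subjects = [f"p{i:02d}" for i in range(15)]
folds = list(leave_one_person_out(subjects))
# 15 folds: each trains on 14 subjects and tests on the one held out.
```

Under this protocol, the reported angular error is the average over all 15 folds, so no subject ever appears in both the train and test sets of a fold.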