PureGaze: Purifying Gaze Feature for Generalizable Gaze Estimation
Authors: Yihua Cheng, Yiwei Bao, Feng Lu
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first conduct experiments in four cross-dataset tasks. The result is shown in Tab. 1. |
| Researcher Affiliation | Academia | 1State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University 2Peng Cheng Laboratory, Shenzhen, China {yihua_c, baoyiwei, lufeng}@buaa.edu.cn |
| Pseudocode | No | The paper describes the architecture and algorithms using text and diagrams, but it does not include a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The code is released in https://github.com/yihuacheng/PureGaze. |
| Open Datasets | Yes | We use Gaze360 (Kellnhofer et al. 2019) and ETH-XGaze (Zhang et al. 2020) as training set, since they have a large number of subjects, various gaze range and head pose. We test our model in two popular datasets, which are MPIIGaze (Zhang et al. 2017) and EyeDiap (Funes Mora, Monay, and Odobez 2014). |
| Dataset Splits | Yes | ETH-XGaze (Zhang et al. 2020) contains a total of 1.1M images from 110 subjects. It provides a training set containing 80 subjects. We split 5 subjects for validation and others are used for training. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions implementing some methods with PyTorch ('We implement Full-Face and Dilated-Net using Pytorch'), but it does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | We set four values for σ², which are 10, 20, 30 and 40, and also evaluate the performance without the loss. As for TALoss, we set four values for k, which are 0, 0.25, 0.5 and 0.75. |
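To make the reported split and ablation settings concrete, here is a minimal Python sketch. It assumes a subject-level split of the 80 ETH-XGaze training subjects (5 held out for validation, per the Dataset Splits row) and enumerates the σ²/k grid from the Experiment Setup row. The subject IDs, the random seed, and the print placeholder are illustrative assumptions; the paper does not specify the pipeline at this level.

```python
import random

# Hypothetical reconstruction of the subject-level split: ETH-XGaze provides
# 80 training subjects; 5 are held out for validation, the rest for training.
# Subject ID format and the seed are assumptions, not values from the paper.
random.seed(0)
subjects = [f"subject{i:04d}" for i in range(80)]
val_subjects = set(random.sample(subjects, 5))
train_subjects = [s for s in subjects if s not in val_subjects]
assert len(train_subjects) == 75 and len(val_subjects) == 5

# Ablation grid from the Experiment Setup row: four values of sigma^2 and
# four values of k for TALoss (plus a no-loss baseline, omitted here).
sigma_sq_values = [10, 20, 30, 40]
k_values = [0, 0.25, 0.5, 0.75]
for sigma_sq in sigma_sq_values:
    for k in k_values:
        # The print stands in for a hypothetical train/evaluate call into
        # the actual PureGaze pipeline.
        print(f"run: sigma^2={sigma_sq}, k={k}, "
              f"{len(train_subjects)} train / {len(val_subjects)} val subjects")
```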