Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset
Authors: Ruohan Zhang, Calen Walshe, Zhuode Liu, Lin Guan, Karl Muller, Jake Whritner, Luxin Zhang, Mary Hayhoe, Dana Ballard
AAAI 2020, pp. 6811-6820
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the usefulness of the dataset through two simple applications: predicting human gaze and imitating human demonstrated actions. The quality of the data leads to promising results in both tasks. Moreover, using a learned human gaze model to inform imitation learning leads to a 115% increase in game performance. |
| Researcher Affiliation | Academia | Ruohan Zhang (1*), Calen Walshe (2), Zhuode Liu (1), Lin Guan (1), Karl S. Muller (2), Jake A. Whritner (2), Luxin Zhang (3), Mary M. Hayhoe (2), Dana H. Ballard (1,2). (1) Department of Computer Science, University of Texas at Austin; (2) Center for Perceptual Systems, University of Texas at Austin; (3) The Robotics Institute, Carnegie Mellon University. *zharu@utexas.edu |
| Pseudocode | No | The paper describes methods in text and provides network diagrams, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The dataset is named Atari-HEAD (Atari Human Eye-Tracking And Demonstration). Available at: https://zenodo.org/record/3451402 |
| Open Datasets | Yes | The dataset is named Atari-HEAD (Atari Human Eye-Tracking And Demonstration). Available at: https://zenodo.org/record/3451402 |
| Dataset Splits | No | We use 80% of the data for training and 20% for testing. The paper does not mention a validation split, only training and testing (a minimal split sketch follows the table). |
| Hardware Specification | No | The paper mentions an EyeLink 1000 eye tracker used for data collection, but does not specify the hardware (e.g., GPU/CPU models) used to run the experiments or train the models. |
| Software Dependencies | No | The paper mentions the Arcade Learning Environment (ALE) and, implicitly through figure captions, Python, but does not give version numbers for any software dependencies used in the experiments (e.g., PyTorch, TensorFlow, or specific Python libraries). |
| Experiment Setup | Yes | We trained a convolution-deconvolution gaze network with KL divergence (ε = 1e-10) as the loss function to predict human gaze positions. A separate network is trained for each game (a sketch of this loss follows the table). |
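
Since the paper reports only an 80/20 train/test split with no validation set, here is a minimal sketch of such a split. The helper name `split_frames`, the random shuffling, and the seed are assumptions: the paper does not say whether frames are shuffled or split by trial.

```python
import numpy as np

def split_frames(num_frames: int, train_frac: float = 0.8, seed: int = 0):
    """Split frame indices into train/test sets (no validation split).

    Hypothetical helper: only the 80/20 ratio comes from the paper;
    the random permutation and seed are assumptions.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_frames)
    cutoff = int(train_frac * num_frames)
    return indices[:cutoff], indices[cutoff:]

train_idx, test_idx = split_frames(100_000)
```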
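
The gaze network is trained with a KL divergence loss using ε = 1e-10. Below is a minimal NumPy sketch, assuming the common saliency-prediction formulation KL(target || prediction) with ε-smoothing inside the logarithm; the function name `kl_gaze_loss`, the normalization step, and the exact placement of ε are assumptions, not the authors' released implementation.

```python
import numpy as np

EPS = 1e-10  # smoothing constant reported in the paper

def kl_gaze_loss(pred: np.ndarray, target: np.ndarray, eps: float = EPS) -> float:
    """KL divergence between a ground-truth gaze map and a predicted
    saliency map, with epsilon smoothing.

    Both maps are assumed to be non-negative 2-D arrays over image
    pixels; they are normalized here so each sums to 1.
    """
    pred = pred / (pred.sum() + eps)
    target = target / (target.sum() + eps)
    return float(np.sum(target * np.log(eps + target / (eps + pred))))

# Example: a uniform prediction against a one-hot gaze map gives a
# large loss, since the prediction spreads mass away from the fixation.
h, w = 84, 84
pred = np.ones((h, w))
target = np.zeros((h, w))
target[40, 40] = 1.0
print(kl_gaze_loss(pred, target))
```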