Inferring Human Attention by Learning Latent Intentions

Authors: Ping Wei, Dan Xie, Nanning Zheng, Song-Chun Zhu

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "Experiments on a new 3D human attention dataset prove the strength of our method."
Researcher Affiliation | Academia | Xi'an Jiaotong University, Xi'an, China; University of California, Los Angeles, USA
Pseudocode | No | The paper contains no sections explicitly labeled 'Pseudocode' or 'Algorithm', nor any structured algorithm blocks.
Open Source Code | No | The paper provides no statement or link regarding public release of its source code.
Open Datasets | No | The authors state: "We collected a new dataset of 3D human attention." The dataset comprises 150 RGB-D videos with 3D human skeletons across 14 activity categories: drink water with mug, drink water from fountain, mop floor, fetch water from dispenser, fetch object from box, write on whiteboard, move bottle, write on paper, watch TV, throw trash, use computer, use elevator, use microwave, and use refrigerator. No access information (URL, DOI, or repository) is provided for this dataset.
Dataset Splits | No | The paper mentions a 'training sequence' and an EM-based learning algorithm but gives no numerical split details (e.g., percentages or counts for training, validation, or test sets).
Hardware Specification | No | The paper notes that data was captured with a Kinect camera but gives no details of the computational hardware (e.g., CPU or GPU models, memory, or cloud resources) used to run the experiments.
Software Dependencies | No | The paper mentions an EM-based approach, an LDS, and a Support Vector Machine (with a citation to LIBSVM), but provides no version numbers for any software dependency or library used in the implementation.
Experiment Setup | No | The paper describes the overall model and learning algorithm but omits experimental setup parameters such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.