Learning Social Affordance for Human-Robot Interaction

Authors: Tianmin Shu, M. S. Ryoo, Song-Chun Zhu

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results demonstrate that our Markov Chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful social affordance from RGB-D videos, which allows us to generate appropriate full body motion for an agent.
Researcher Affiliation | Academia | 1 Center for Vision, Cognition, Learning and Autonomy, University of California, Los Angeles; 2 School of Informatics and Computing, Indiana University, Bloomington
Pseudocode | Yes | Algorithm 1 Learning Algorithm; Algorithm 2 Motion Synthesis Algorithm
Open Source Code | No | The paper states: 'The dataset is available at: http://www.stat.ucla.edu/ tianmin.shu/Social Affordance.' This link is for the dataset, not open-source code for the methodology. There is no explicit statement about releasing the code for the described methods.
Open Datasets | Yes | We collected a new RGB-D video dataset, i.e., UCLA Human-Human-Object Interaction (HHOI) dataset, which includes 3 types of human-human interactions, i.e., shake hands, high-five, pull up, and 2 types of human-object-human interactions, i.e., throw and catch, and hand over a cup. On average, there are 23.6 instances per interaction performed by totally 8 actors recorded from various views. The dataset is available at: http://www.stat.ucla.edu/ tianmin.shu/Social Affordance.
Dataset Splits | Yes | We split the instances by four folds for the training and testing, where the actor combinations in the testing set are different from the ones in the training set.
Hardware Specification | Yes | Our learning algorithm converges within 100 outer loop iterations, which takes 3-5 hours to run on a PC with an 8-core 3.6 GHz CPU.
Software Dependencies | No | The paper mentions 'Our motion synthesis can be ran at the average speed of 5 fps with our unoptimized Matlab code.' However, it does not provide a specific version number for Matlab or any other software dependency.
Experiment Setup | Yes | In practice, we set λ = 1. β = 0.3 and γ = 1.0 are the parameters for our CRP.
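The Experiment Setup row reports only the CRP parameter values β = 0.3 and γ = 1.0; the paper excerpt quoted here does not say which roles they play. As a minimal sketch, assuming a two-parameter (Pitman-Yor-style) Chinese Restaurant Process with discount β and concentration γ, sequential table assignment would look like this (the function name and the parameter mapping are illustrative assumptions, not the authors' code):

```python
import random

def crp_assign(n_customers, beta=0.3, gamma=1.0, seed=0):
    """Sequentially seat customers under an assumed two-parameter CRP.

    P(existing table k) proportional to max(n_k - beta, 0)
    P(new table)        proportional to gamma + beta * K
    where n_k is the table's occupancy and K the current table count.
    The paper only states beta = 0.3, gamma = 1.0 for its CRP.
    """
    rng = random.Random(seed)
    counts = []   # occupancy of each table
    labels = []   # table index assigned to each customer
    for _ in range(n_customers):
        k = len(counts)
        # weight for each existing table, plus one entry for a new table
        weights = [max(c - beta, 0.0) for c in counts] + [gamma + beta * k]
        r = rng.random() * sum(weights)
        acc = 0.0
        for idx, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if idx == k:
            counts.append(1)   # open a new table
        else:
            counts[idx] += 1   # join an existing table
        labels.append(idx)
    return labels, counts

labels, counts = crp_assign(20)
```

With β = 0.3 and γ = 1.0, the prior mildly encourages opening new clusters as more are created, which is consistent with the paper's goal of automatically discovering a variable number of semantically meaningful components.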