That and There: Judging the Intent of Pointing Actions with Robotic Arms
Authors: Malihe Alikhani, Baber Khalid, Rahul Shome, Chaitanya Mitash, Kostas Bekris, Matthew Stone
AAAI 2020, pp. 10343-10351 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These principles are evaluated through studies where English-speaking human subjects view animations of simulated robots instructing pick-and-place tasks. The evaluation distinguishes two classes of pointing actions... The study indicates that human subjects show greater flexibility... The results also demonstrate the effects of variation... |
| Researcher Affiliation | Academia | Malihe Alikhani, Baber Khalid, Rahul Shome, Chaitanya Mitash, Kostas Bekris, Matthew Stone; Computer Science, Rutgers University, 110 Frelinghuysen Road, Piscataway, NJ 08854-8019; firstname.lastname@rutgers.edu |
| Pseudocode | No | The paper describes the motion generation and pointing action generation processes but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | The data, code, and videos are available at https://github.com/malihealikhani/That_and_There. |
| Open Datasets | Yes | The data, code, and videos are available at https://github.com/malihealikhani/That_and_There. |
| Dataset Splits | No | This paper conducts a human subject study involving data collection and analysis of human judgments, rather than training a machine learning model. Therefore, the concepts of training and validation dataset splits in the context of model development are not applicable. |
| Hardware Specification | No | The paper mentions simulated robotic platforms (Rethink Baxter, Kuka IIWA14) and rendering in Blender, but it does not specify the CPU, GPU, or other computational hardware used to run the simulations or analyze the data. |
| Software Dependencies | No | The paper mentions the use of software such as Blender, Amazon Polly, and PRAAT (www.praat.org), and a robotic motion planning library (Littlefield et al. 2014), but it does not provide specific version numbers for these software components or any other dependencies. |
| Experiment Setup | Yes | **Experiment Setup:** Each animation shows a simulated robot producing two pointing gestures to specify a pick-and-place task. Following the animation, viewers are asked whether a specific image represents a possible result of the specified task. **Robotic Platforms:** The experiments were performed on two different robotic geometries, based on a Rethink Baxter and a Kuka IIWA14. **Workspace Setup:** Objects are placed in front of the manipulators. **Motion Generation:** The end-effector of the manipulator is instructed to move to pre-specified waypoints... **Pointing Action Generation:** Potential pointing targets are placed using a cone C(r, θ)... (see the sketch after this table). **Speech:** Some experiments also included verbal cues, with phrases like "Put that there" accompanying the pointing actions. |
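
As context for the pointing-action generation step above, here is a minimal, hypothetical sketch of how candidate pointing targets could be sampled inside a cone C(r, θ). The apex/axis parameterization, the function name `sample_targets_in_cone`, and the sampling scheme are assumptions made for illustration only; the paper and its repository define the actual procedure.

```python
import numpy as np

def sample_targets_in_cone(apex, direction, r, theta, n=10, rng=None):
    """Sample n candidate pointing-target positions inside a cone C(r, theta).

    Hypothetical parameterization: apex at the pointer, axis along `direction`,
    maximum range r, half-angle theta (radians). The paper's actual scheme
    may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    axis = np.asarray(direction, dtype=float)
    axis /= np.linalg.norm(axis)

    # Orthonormal basis (axis, u, v) so we can rotate away from the cone axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)

    targets = []
    for _ in range(n):
        dist = r * rng.random() ** (1 / 3)        # roughly uniform over the cone volume
        dev = theta * np.sqrt(rng.random())       # angular deviation from the axis, <= theta
        azi = 2.0 * np.pi * rng.random()          # rotation around the cone axis
        offset = (np.cos(dev) * axis
                  + np.sin(dev) * (np.cos(azi) * u + np.sin(azi) * v))
        targets.append(np.asarray(apex, dtype=float) + dist * offset)
    return np.stack(targets)

# Example: five candidate targets within 0.5 m and 15 degrees of the pointing ray.
candidates = sample_targets_in_cone(apex=[0.0, 0.0, 0.3],
                                    direction=[0.0, 1.0, 0.0],
                                    r=0.5, theta=np.radians(15), n=5)
print(candidates)
```

Under this sketch, whether distractor objects fall inside or outside the cone would determine how ambiguously a pointing action picks out its target, which is the kind of variation the study's animations manipulate.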