Interactive, Collaborative Robots: Challenges and Opportunities

Authors: Danica Kragic, Joakim Gustafson, Hakan Karaoguz, Patric Jensfelt, Robert Krug

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate our approach to interactive and collaborative robotics by means of a pick & place scenario where a human operator and a robot interact with each other in a joint LEGO picking task (see Fig. 1). In a user study, 29 subjects performed a task where a human operator and a robot took turns asking the other to pick up one of the LEGO pieces on the table. They completed three trials, picking and placing 15 objects with the different grounding modalities. After each trial, they answered subjective questions from the Presence Inventory [Lombard et al., 2009] and the Presence Questionnaire [Witmer and Singer, 1998]. In the subjective measures we found that the Mixed Reality system was the most engaging but the least observable (due to the limited screen size of the head-mounted display used). Projection onto the table was considered best overall, providing good observability with the least display interference with the task. We did not find any significant differences in completion times across the different modalities, and they led to very similar error rates. (A hypothetical analysis sketch for such a comparison follows this table.)
Researcher Affiliation | Academia | Danica Kragic (1), Joakim Gustafson (2), Hakan Karaoguz (1), Patric Jensfelt (1) and Robert Krug (1). (1) Robotics, Perception and Learning Lab, KTH Royal Institute of Technology, Stockholm, Sweden. (2) Speech, Music and Hearing Department, KTH Royal Institute of Technology, Stockholm, Sweden. Emails: dani@kth.se, jocke@speech.kth.se, hkarao@kth.se, patric@kth.se, rkrug@kth.se
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code release statement, or code in supplementary materials) for the methodology described in this paper.
Open Datasets | No | The paper describes a custom setup involving "LEGO pieces" and a "user study" but does not provide concrete access information (link, DOI, repository name, formal citation with authors/year, or reference to established benchmark datasets) for a publicly available or open dataset used in the experiments.
Dataset Splits | No | The paper describes a user study but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper mentions robotic hardware components like the "dual-arm ABB YuMi manipulator" and "Kinect structured-light sensor" but does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper discusses various concepts and frameworks (e.g., Behavior Trees, subsumption architecture) and general methods (e.g., deep learning), but does not provide specific ancillary software details (e.g., library or solver names with version numbers such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment. (A minimal Behavior Tree sketch is given after this table.)
Experiment Setup | No | The paper describes the general setup of the interactive pick & place scenario and mentions the use of "simple predefined PD-control laws" and a "common state machine", but it does not provide specific experimental setup details (concrete hyperparameter values, training configurations, or system-level settings) in the main text. (A minimal PD-control sketch is given after this table.)
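
The Research Type entry reports no significant differences in completion times across the grounding modalities, but the excerpt does not state which statistical test was used. The following is a minimal sketch of one plausible repeated-measures comparison; the modality names, the placeholder data, and the choice of a Friedman test are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical analysis sketch: comparing per-subject task completion times
# across three grounding modalities with a non-parametric repeated-measures
# test. Modality names, data values, and the test choice are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 29  # number of subjects reported in the excerpt

# One mean completion time (seconds) per subject and per modality
# (placeholder data; real values would come from the study logs).
times = {
    "mixed_reality": rng.normal(60, 10, n_subjects),
    "projection": rng.normal(58, 10, n_subjects),
    "screen": rng.normal(61, 10, n_subjects),
}

# Friedman test: repeated-measures comparison across the three conditions.
statistic, p_value = stats.friedmanchisquare(*times.values())
print(f"Friedman chi-square = {statistic:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference in completion times across modalities.")
```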
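The Software Dependencies entry notes that the paper discusses Behavior Trees without giving implementation details. The sketch below shows the core tick mechanism of a behavior tree in generic form; the node classes, names, and the pick-and-place leaves are illustrative assumptions and do not reproduce the authors' system.

```python
# Minimal Behavior Tree sketch (not the authors' implementation): a sequence
# node ticks its children in order and stops at the first non-successful child.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Tick children left to right; return early on FAILURE or RUNNING."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return self.fn()

# Illustrative pick-and-place sequence; a real system would call perception
# and control routines instead of returning SUCCESS directly.
tree = Sequence([
    Action("locate_lego", lambda: Status.SUCCESS),
    Action("pick", lambda: Status.SUCCESS),
    Action("place", lambda: Status.SUCCESS),
])
print(tree.tick())  # Status.SUCCESS
```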
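The Experiment Setup entry quotes the use of "simple predefined PD-control laws" without gains or other parameters. Below is a minimal joint-space PD control sketch on a toy unit-inertia plant; the gains, the 7-DoF dimension, and the simulated dynamics are assumptions chosen only to make the example runnable, not values from the paper.

```python
# Minimal sketch of a joint-space PD control law; gains and plant are assumed.
import numpy as np

def pd_control(q, dq, q_des, kp=50.0, kd=5.0):
    """PD law: tau = Kp * (q_des - q) - Kd * dq."""
    return kp * (q_des - q) - kd * dq

# Toy double-integrator joints to show the controller driving the error to zero.
q = np.zeros(7)          # current joint positions (e.g., one 7-DoF arm)
dq = np.zeros(7)         # current joint velocities
q_des = np.full(7, 0.5)  # desired joint positions
dt = 0.001

for _ in range(5000):
    tau = pd_control(q, dq, q_des)
    dq += tau * dt       # unit-inertia dynamics, purely for illustration
    q += dq * dt

print(np.round(q, 3))    # approaches 0.5 for every joint
```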