Action-Conditioned Generation of Bimanual Object Manipulation Sequences

Authors: Haziq Razali, Yiannis Demiris

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on the KIT Motion Capture and KIT RGBD Bimanual Manipulation datasets and show improvements over a simplified approach that treats the entire body as a single entity, and existing whole-body-only methods.
Researcher Affiliation | Academia | Haziq Razali, Yiannis Demiris; Personal Robotics Lab, Dept. of Electrical and Electronic Engineering, Imperial College London; {h.bin-razali20,y.demiris}@imperial.ac.uk
Pseudocode | No | The paper describes the method using text and equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code at www.imperial.ac.uk/personal-robotics/software
Open Datasets | Yes | The KIT Motion Capture Dataset (Krebs et al. 2021) contains motion capture data of a right-handed person performing bimanual actions such as Cut, Pour, Stir, etc. The KIT RGBD Dataset (Dreher, Wächter, and Asfour 2019) differs in that it contains RGBD recordings instead of motion capture.
Dataset Splits | No | The paper mentions a train:test ratio of 70:30 but does not explicitly state a separate validation split or its proportions. (An illustrative split sketch follows the table.)
Hardware Specification | Yes | The experiments were implemented using PyTorch 1.12.0 installed on an Ubuntu 20.04 machine with an NVIDIA RTX-2080.
Software Dependencies | Yes | The experiments were implemented using PyTorch 1.12.0 installed on an Ubuntu 20.04 machine with an NVIDIA RTX-2080.
Experiment Setup | Yes | We train all models including ours for 1000 epochs using the ADAM optimizer (Kingma and Ba 2014) with an initial learning rate of 1e-3 and batch size of 32. (An illustrative training-loop sketch follows the table.)
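For concreteness, the 70:30 train:test ratio noted in the Dataset Splits row could be reproduced along the following lines. This is a minimal sketch, not the authors' code: the per-sequence granularity, the random seed, and the `split_sequences` helper name are all assumptions, since the paper does not specify how the split is drawn.

```python
import random

def split_sequences(sequences, train_ratio=0.7, seed=0):
    """Shuffle and split a list of motion sequences into train/test sets.

    The 70:30 ratio matches the paper; the fixed seed and the choice to
    split at the sequence level are assumptions made for this sketch.
    """
    rng = random.Random(seed)
    indices = list(range(len(sequences)))
    rng.shuffle(indices)
    cut = int(train_ratio * len(sequences))
    train = [sequences[i] for i in indices[:cut]]
    test = [sequences[i] for i in indices[cut:]]
    return train, test
```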
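Similarly, the hyperparameters in the Experiment Setup row map onto a standard PyTorch training loop. The sketch below is illustrative only: the placeholder model, the random tensor dataset, and the MSE loss are assumptions, while the epoch count, optimizer, learning rate, and batch size come from the quoted setup.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters quoted in the paper's experiment setup.
EPOCHS = 1000
LR = 1e-3
BATCH_SIZE = 32

# Placeholders: the paper's actual model and KIT data are not reproduced here.
model = torch.nn.Linear(64, 64)
dataset = TensorDataset(torch.randn(256, 64), torch.randn(256, 64))
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)
criterion = torch.nn.MSELoss()  # loss choice is an assumption for this sketch

for epoch in range(EPOCHS):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```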