Forecasting Bimanual Object Manipulation Sequences from Unimanual Observations
Authors: Haziq Razali, Yiannis Demiris
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our approach outperforms the state-of-the-art pose forecasting methods on bimanual manipulation datasets. The paper includes sections such as 'Experiments' and 'Quantitative Results', detailing evaluations on the KIT MoCap and KIT RGBD datasets using metrics such as Mean Per Joint Position Error, with comparisons against baseline methods. |
| Researcher Affiliation | Academia | Haziq Razali, Yiannis Demiris. Personal Robotics Lab, Dept. of Electrical and Electronic Engineering, Imperial College London. {h.bin-razali20,y.demiris}@imperial.ac.uk |
| Pseudocode | No | The paper describes its models and processes using mathematical equations and textual explanations, but it does not include a formal pseudocode block or algorithm figure. |
| Open Source Code | No | Code at www.imperial.ac.uk/personal-robotics/software. This link leads to a general lab software page rather than a specific code repository for the methodology presented in this paper, so it does not provide concrete access to the code. |
| Open Datasets | Yes | The KIT Bimanual Manipulation (KIT MoCap) (Krebs et al. 2021) dataset contains 2 hours of motion-capture recordings... The KIT RGBD Bimanual Actions (KIT RGBD) (Dreher, Wächter, and Asfour 2019) dataset also contains 2 hours of recordings... |
| Dataset Splits | No | The paper uses the KIT Mo Cap and KIT RGBD datasets for evaluation, but it does not explicitly state the training, validation, or test dataset splits (e.g., specific percentages or sample counts). |
| Hardware Specification | Yes | trained using the ADAM (Kingma and Ba 2015) optimizer for 500 epochs with a batch size of 64 until convergence, for about 12 hours on an NVIDIA RTX 2080. |
| Software Dependencies | No | The experiments were implemented in PyTorch and trained using the ADAM (Kingma and Ba 2015) optimizer. However, specific version numbers for PyTorch or other software dependencies are not provided. |
| Experiment Setup | Yes | We train our model to observe the past 10 time steps (1 second) to predict the future 20 (2 seconds). All models contain 3 million parameters... trained using the ADAM (Kingma and Ba 2015) optimizer for 500 epochs with a batch size of 64 until convergence... and {λ₁, λ₂} = {10⁻², 10⁻¹}. A hedged sketch of this setup follows the table. |
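
For context on the quoted setup, below is a minimal PyTorch sketch of a comparable training configuration. Only the horizons (observe 10 steps, predict 20), the optimizer (ADAM), the epoch count (500), the batch size (64), and the MPJPE metric come from the table above; the forecaster architecture, joint count, learning rate, and all names (`PoseForecaster`, `mpjpe`) are hypothetical assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sequence-to-sequence pose forecaster standing in for the
# paper's model. The GRU encoder-decoder architecture here is an
# assumption; only the horizons (10 observed steps -> 20 predicted steps)
# are taken from the reported setup.
class PoseForecaster(nn.Module):
    def __init__(self, num_joints=21, hidden=256, pred_len=20):
        super().__init__()
        self.pred_len = pred_len
        self.encoder = nn.GRU(num_joints * 3, hidden, batch_first=True)
        self.decoder = nn.GRU(num_joints * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_joints * 3)

    def forward(self, obs):                      # obs: (B, 10, J*3)
        _, h = self.encoder(obs)                 # summarize the observed past
        frame = obs[:, -1:]                      # seed with the last observed pose
        preds = []
        for _ in range(self.pred_len):           # autoregressive rollout, 20 steps
            out, h = self.decoder(frame, h)
            frame = self.head(out)
            preds.append(frame)
        return torch.cat(preds, dim=1)           # (B, 20, J*3)

def mpjpe(pred, target, num_joints):
    """Mean Per Joint Position Error: mean Euclidean distance per joint."""
    pred = pred.view(*pred.shape[:2], num_joints, 3)
    target = target.view(*target.shape[:2], num_joints, 3)
    return (pred - target).norm(dim=-1).mean()

# Training loop with the reported settings: ADAM, 500 epochs, batch size 64.
model = PoseForecaster()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
# for epoch in range(500):
#     for obs, future in loader:  # loader yields (64, 10, J*3) and (64, 20, J*3)
#         loss = mpjpe(model(obs), future, num_joints=21)
#         optim.zero_grad(); loss.backward(); optim.step()
```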