Abstract-to-Executable Trajectory Translation for One-Shot Task Generalization
Authors: Stone Tao, Xiaochen Li, Tongzhou Mu, Zhiao Huang, Yuzhe Qin, Hao Su
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various unseen long-horizon tasks with different robot embodiments demonstrate the practicability of our methods to achieve one-shot task generalization. Videos and more details can be found on the project page: https://trajectorytranslation.github.io/. |
| Researcher Affiliation | Academia | Stone Tao¹, Xiaochen Li¹, Tongzhou Mu¹, Zhiao Huang¹, Yuzhe Qin¹, Hao Su¹ (¹UC San Diego). Correspondence to: Stone Tao <stao@ucsd.edu>. |
| Pseudocode | No | The paper describes the steps of the method in text (e.g., in Section 3.1, 'Given a novel task, our method seeks to solve it with the following three steps:'), and shows diagrams (e.g., Figure 2 for architecture), but does not contain explicitly structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not state that source code is released or provide a direct link to a code repository; it only points to a project page: 'Videos and more details can be found on the project page: https://trajectorytranslation.github.io/.' |
| Open Datasets | No | The paper describes experiments conducted within environments like SAPIEN and recreated SILO tasks, implying data is generated within these environments, but it does not provide concrete access information (e.g., links, DOIs, or specific citations to publicly available datasets) for the data used for training. |
| Dataset Splits | No | The paper distinguishes between training tasks and test tasks/environments (e.g., 'Couch Moving Short 3 is the training task' and 'test tasks are all long variations'). However, it does not explicitly describe a separate validation set split or its use for hyperparameter tuning, beyond the general train/test distinction. |
| Hardware Specification | No | The paper specifies hardware used for real-world experiments (UFactory xArm 7 robot, Intel RealSense sensor), but does not provide specific details (e.g., GPU/CPU models, memory) for the hardware used to run the bulk of the simulated experiments. |
| Software Dependencies | No | The paper mentions software components such as PPO, SAPIEN, and GPT-2, but does not provide specific version numbers for any of these components or for programming languages/libraries used. |
| Experiment Setup | Yes | In all experiments, online training hyperparameters are mostly kept the exact same, see Sec. G for specific hyperparameters used. Moreover, all results reported are averaged over 5 seeds. |
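
The Experiment Setup row states that training hyperparameters are largely shared across experiments and that reported results are averaged over 5 seeds. Since no code is released, the following is only a minimal illustrative sketch of seed-averaged reporting under those stated conditions; `run_experiment`, the specific seed values, and the placeholder metric are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the paper reports averaging over 5 seeds but releases no code.
import random
import statistics

SEEDS = [0, 1, 2, 3, 4]  # 5 seeds, as reported; the actual seed values are not given

def run_experiment(seed: int) -> float:
    """Stand-in for one training/evaluation run; returns a success rate in [0, 1]."""
    random.seed(seed)
    return random.uniform(0.6, 0.9)  # placeholder for the real metric

results = [run_experiment(s) for s in SEEDS]
print(f"mean success rate: {statistics.mean(results):.3f} "
      f"(± {statistics.stdev(results):.3f} over {len(SEEDS)} seeds)")
```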