ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills

Authors: Jiayuan Gu, Fanbo Xiang, Xuanlin Li, Zhan Ling, Xiqiang Liu, Tongzhou Mu, Yihe Tang, Stone Tao, Xinyue Wei, Yunchao Yao, Xiaodi Yuan, Pengwei Xie, Zhiao Huang, Rui Chen, Hao Su

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | ManiSkill2 includes 20 manipulation task families with 2000+ object models and 4M+ demonstration frames, which cover stationary/mobile-base, single/dual-arm, and rigid/soft-body manipulation tasks with 2D/3D-input data simulated by fully dynamic engines. It defines a unified interface and evaluation protocol to support a wide range of algorithms (e.g., classic sense-plan-act, RL, IL), visual observations (point cloud, RGBD), and controllers (e.g., action type and parameterization).
Researcher Affiliation | Collaboration | ¹University of California San Diego, ²Tsinghua University
Pseudocode | Yes | The simulation and two-way coupling algorithm are summarized in Algorithm 1.
Open Source Code | Yes | We open-source all code for our benchmark (simulator, environments, and baselines) and host an online challenge open to interdisciplinary researchers. Code: https://github.com/haosulab/ManiSkill2
Open Datasets | Yes | ManiSkill2 includes 20 manipulation task families with 2000+ object models and 4M+ demonstration frames, which cover stationary/mobile-base, single/dual-arm, and rigid/soft-body manipulation tasks with 2D/3D-input data simulated by fully dynamic engines.
Dataset Splits | No | The paper distinguishes between "training objects" and "test objects" but does not explicitly state a separate validation split with specific percentages or counts for its experiments.
Hardware Specification | Yes | All frameworks are given a computation budget of 16 CPU cores (logical processors) of an Intel i9-9960X CPU with 128 GB memory and 1 RTX Titan GPU with 24 GB memory.
Software Dependencies | No | The paper mentions several software components such as Python, SAPIEN, Warp, OpenAI Gym, PyBullet, and IMPALA. While some are cited, specific version numbers for all key dependencies (e.g., the Python or PyTorch version) needed for reproducibility are not consistently provided.
Experiment Setup | Yes | Hyperparameters are shown in Table 9.