DaXBench: Benchmarking Deformable Object Manipulation with Differentiable Physics
Authors: Siwei Chen, Yiqing Xu, Cunjun Yu, Linfeng Li, Xiao Ma, Zhongwen Xu, David Hsu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper presents DaXBench, a differentiable simulation framework for DOM. While existing work often focuses on a specific type of deformable objects, DaXBench supports fluid, rope, cloth ...; it provides a general-purpose benchmark to evaluate widely different DOM methods, including planning, imitation learning, and reinforcement learning. DaXBench combines recent advances in deformable object simulation with JAX, a high-performance computational framework. All DOM tasks in DaXBench are wrapped with the OpenAI Gym API for easy integration with DOM algorithms (a hedged Gym-style usage sketch is given below the table). We hope that DaXBench provides to the research community a comprehensive, standardized benchmark and a valuable tool to support the development and evaluation of new DOM methods. |
| Researcher Affiliation | Collaboration | Siwei Chen¹, Yiqing Xu¹, Cunjun Yu¹, Linfeng Li¹, Xiao Ma², Zhongwen Xu², David Hsu¹; ¹National University of Singapore, ²Sea AI Lab |
| Pseudocode | No | The paper includes code examples in Appendix A.2 (Listing 1 and Listing 2), but these are not presented as formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and video are available online. The link of the project is https://github.com/AdaCompNUS/DaXBench. We have included our source code as an easy-to-install package in the supplementary material. |
| Open Datasets | No | The paper introduces a benchmark framework (DaXBench) with various tasks (e.g., Pour-Water, Push-Rope) which serve as environments for training and evaluation. It does not provide or refer to a pre-existing publicly available dataset in the traditional sense of a static collection of data files for training. |
| Dataset Splits | No | The paper describes training and evaluation procedures and mentions using multiple seeds and rollouts, but it does not specify explicit training/validation/test dataset splits with percentages, counts, or references to predefined splits, as it operates within a simulation environment rather than on static datasets. |
| Hardware Specification | Yes | Using our implementation, we can finish an iteration (both forward and backward pass) for 128 rollouts with 80 timesteps on a server with 4 2080-Ti GPUs in 3 seconds. (A JAX sketch of such a batched forward/backward pass follows the table.) |
| Software Dependencies | No | The paper mentions key software components like "JAX (Bradbury et al., 2018)" and "OpenAI Gym API (Brockman et al., 2016)", but it does not provide specific version numbers for these software dependencies, which are required for full reproducibility. |
| Experiment Setup | Yes | For each task, the goal is specified by the desired final positions of the set of particles, g, representing the deformable object. In this setup, the reward can be intuitively defined by how well the current object particles match those of the goal. Hence, we define the ground-truth reward as r_gt(s, a) = exp(-λ D(s', g)), where s' is the next state resulting from taking action a at the current state s, and D(s', g) is a non-negative distance measure between the positions of the object's particles in s' and those of the goal g. ... During training, we use the sum of the ground-truth and auxiliary reward functions, r_gt + r_aux, and during evaluation, we only use the ground-truth reward r_gt. We report the mean and variance of the performance over 5 seeds, with 20 rollouts per seed under different initializations. (A worked numerical sketch of this reward appears below the table.) |
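
The Research Type row above quotes the paper's statement that all DOM tasks are wrapped with the OpenAI Gym API. The following is a minimal sketch of driving such a task; `make_daxbench_env` is a hypothetical placeholder for whatever constructor DaXBench actually exposes, and only the `reset()`/`step()`/`action_space` conventions follow the standard Gym interface named in the paper.

```python
# Minimal Gym-style interaction loop. `make_daxbench_env` is a hypothetical
# stand-in for the actual DaXBench constructor; only reset()/step()/
# action_space follow the standard OpenAI Gym API mentioned in the paper.
def run_episode(make_daxbench_env, horizon=80):
    """Roll out a random policy for one episode and return the total reward."""
    env = make_daxbench_env()                    # e.g. a Pour-Water or Push-Rope task
    obs = env.reset()                            # Gym-style reset
    total_reward = 0.0
    for _ in range(horizon):
        action = env.action_space.sample()       # random action (Gym convention)
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```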
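
The Hardware Specification row quotes one iteration of 128 rollouts with 80 timesteps, including both forward and backward passes. The sketch below shows how such a batched differentiable rollout is typically expressed in JAX; `dummy_step`, the state dimensionality, and the squared-distance loss are assumptions standing in for DaXBench's actual simulator, while `jax.vmap`, `jax.lax.scan`, `jax.value_and_grad`, and `jax.jit` are standard JAX APIs.

```python
import jax
import jax.numpy as jnp

NUM_ROLLOUTS, HORIZON = 128, 80   # batch size and horizon quoted in the paper
STATE_DIM = 3                     # dummy particle-state dimension (assumption)


def dummy_step(state, action):
    """Stand-in for a differentiable simulator step (assumption, not DaXBench's)."""
    return state + 0.01 * action


def rollout_loss(actions, init_state, goal):
    """Squared distance of the final state to the goal after one rollout."""
    final_state, _ = jax.lax.scan(
        lambda s, a: (dummy_step(s, a), None), init_state, actions)
    return jnp.sum((final_state - goal) ** 2)


def batched_loss(actions, init_states, goal):
    # Vectorize over the rollout dimension, then average to a scalar loss.
    losses = jax.vmap(rollout_loss, in_axes=(0, 0, None))(actions, init_states, goal)
    return jnp.mean(losses)


# One "iteration": forward and backward pass over all 128 rollouts.
key = jax.random.PRNGKey(0)
actions = jax.random.normal(key, (NUM_ROLLOUTS, HORIZON, STATE_DIM))
init_states = jnp.zeros((NUM_ROLLOUTS, STATE_DIM))
goal = jnp.ones(STATE_DIM)

loss, grads = jax.jit(jax.value_and_grad(batched_loss))(actions, init_states, goal)
```

Differentiating through the whole rollout in this way is what the paper's "forward and backward pass" timing refers to; the gradients flow from the final-state loss back to every action in the trajectory.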
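
For the ground-truth reward quoted in the Experiment Setup row, r_gt(s, a) = exp(-λ D(s', g)), here is a small numerical sketch. The choice of D as a mean per-particle Euclidean distance and λ = 1.0 are assumptions made only for illustration; the paper merely requires D to be a non-negative distance measure between the particles of the next state s' and those of the goal g.

```python
import jax.numpy as jnp


def ground_truth_reward(next_particles, goal_particles, lam=1.0):
    """r_gt = exp(-lam * D), with D = mean per-particle Euclidean distance (assumed)."""
    dist = jnp.mean(jnp.linalg.norm(next_particles - goal_particles, axis=-1))
    return jnp.exp(-lam * dist)


# Sanity check: a perfect match (D = 0) yields the maximum reward of 1.
particles = jnp.zeros((100, 3))
print(float(ground_truth_reward(particles, particles)))  # 1.0
```

Per the quoted protocol, an auxiliary term r_aux would be added to this value during training, while evaluation reports r_gt alone.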