Thin-Shell Object Manipulations With Differentiable Physics Simulations

Authors: Yian Wang, Juntian Zheng, Zhehuan Chen, Zhou Xian, Gu Zhang, Chao Liu, Chuang Gan

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments suggest that manipulating thin-shell objects presents several unique challenges: ... We conduct a comprehensive performance evaluation of various methods, including gradient-based trajectory optimization (GD), model-free reinforcement learning (RL) algorithms (Soft Actor-Critic (SAC) (Haarnoja et al., 2018) and Proximal Policy Optimization (PPO) (Schulman et al., 2017)), and a sampling-based trajectory optimization method, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen & Ostermeier, 2001). (See the baseline training sketch after this table.)
Researcher Affiliation | Collaboration | Yian Wang (1), Juntian Zheng (2), Zhehuan Chen (3), Zhou Xian (4), Gu Zhang (5), Chao Liu (6), Chuang Gan (1,7); affiliations: 1 UMass Amherst, 2 Tsinghua University, 3 Peking University, 4 CMU, 5 SJTU, 6 MIT, 7 MIT-IBM
Pseudocode | No | The paper does not contain any sections explicitly labeled "Pseudocode" or "Algorithm", nor does it present structured steps in a code-like format.
Open Source Code | No | The paper mentions a project website for "Video demonstration and more information", but does not explicitly state that the source code for Thin Shell Lab or the methods described is publicly available or released. It does mention using "open-source implementations for the RL algorithms (Raffin et al., 2021) and CMA-ES (Hansen et al., 2019)", but these refer to third-party tools, not the authors' own code for the presented work.
Open Datasets | No | The paper describes the creation of a new simulation platform and benchmark tasks where data is dynamically generated through simulation. It does not refer to the use of any pre-existing, publicly available or open datasets for training.
Dataset Splits | No | The paper does not specify exact training, validation, or test dataset splits in terms of percentages or absolute sample counts. The experiments are conducted within the authors' custom simulation environment, and methods are evaluated on tasks within this environment rather than on pre-defined static dataset splits.
Hardware Specification | Yes | The test is done on a single RTX 4090.
Software Dependencies | Yes | At the core of Thin Shell Lab, we present a fully differentiable simulation engine, implemented using the Taichi programming language (Hu et al., 2019; 2020)... To ensure replicability, we use open-source implementations for the RL algorithms (Raffin et al., 2021) and CMA-ES (Hansen et al., 2019). For gradient-based trajectory optimization, we employ the Adam optimizer. ... With the help of the SymPy library for symbolic computation, we optimized the computation graph and hard-coded it into Taichi snippets. (See the differentiable-kernel sketch after this table.)
Experiment Setup | Yes | In all our tasks, we consistently utilize a simulation step of 5e-3 seconds, and we establish a maximum action range specific to each task to ensure system stability. For the majority of our tasks, we confine the action range to 1 millimeter per timestep. However, for tasks that inherently demand higher speeds, such as the Separate and Following tasks, we extend the action range to 2 millimeters. ... We opt for a population size of 40 in CMA-ES with zero-initialized trajectories. The initial variance is set to 1.0, corresponding to 0.0003 in each dimension of manipulator movement per timestep. ... Generally, we execute CMA-ES for 80 percent of the total episodes, where episodes are defined by the number of rollouts. For the Pick Folding task, which demands more extensive training, we set the CMA-ES episode count to 1000 and supplement it with an additional 150 episodes of gradient descent. (See the CMA-ES configuration sketch after this table.)
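
The Research Type row lists SAC and PPO baselines run through the open-source implementations of Raffin et al. (2021). As a hedged illustration only, the sketch below shows how such baselines could be trained with stable-baselines3; "Pendulum-v1" is a stand-in continuous-control environment, not one of the paper's thin-shell tasks, and the timestep budget is an assumed placeholder.

```python
# Minimal sketch (not the authors' code): training SAC and PPO baselines with
# stable-baselines3 (Raffin et al., 2021). "Pendulum-v1" is a stand-in
# continuous-control environment; the benchmark itself uses thin-shell tasks.
import gymnasium as gym
from stable_baselines3 import PPO, SAC

env = gym.make("Pendulum-v1")

sac = SAC("MlpPolicy", env, verbose=0)
sac.learn(total_timesteps=10_000)   # per-task budgets are set separately

ppo = PPO("MlpPolicy", env, verbose=0)
ppo.learn(total_timesteps=10_000)
```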
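
The Software Dependencies row notes that the simulator is implemented in Taichi and that gradients drive the trajectory optimizer. The sketch below, assuming a toy quadratic objective and illustrative field names, shows the general pattern of a differentiable Taichi kernel whose gradients could feed an optimizer such as Adam; it is not the paper's simulation engine.

```python
# Hedged sketch of Taichi's autodiff pattern (toy objective, illustrative
# names); the paper's actual thin-shell energy model is far more involved.
import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.Vector.field(3, dtype=ti.f32, shape=n, needs_grad=True)  # node positions
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)        # scalar objective

@ti.kernel
def init_positions():
    for i in range(n):
        x[i] = ti.Vector([0.1 * i, 0.0, 0.0])

@ti.kernel
def compute_loss():
    # Toy objective: pull every node toward the origin.
    for i in range(n):
        loss[None] += x[i].norm_sqr()

init_positions()
with ti.ad.Tape(loss=loss):       # records the forward pass, then backpropagates
    compute_loss()
print(loss[None], x.grad[0])      # d(loss)/d(x[0]) from reverse-mode autodiff
```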
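
The Experiment Setup row specifies a CMA-ES population size of 40, zero-initialized trajectories, and an initial step-size of 1.0. A minimal configuration sketch using the open-source pycma package (Hansen et al., 2019) is given below; the trajectory horizon, action dimension, rollout cost, and episode budget are illustrative assumptions rather than the paper's task code.

```python
# Hedged sketch of a CMA-ES trajectory-optimization setup with pycma; the
# rollout cost, horizon, and action dimension are placeholders.
import numpy as np
import cma

horizon, action_dim = 100, 3             # assumed per-task values
x0 = np.zeros(horizon * action_dim)      # zero-initialized trajectory

def rollout_cost(flat_traj: np.ndarray) -> float:
    """Placeholder: roll the trajectory out in the simulator, return task loss."""
    actions = flat_traj.reshape(horizon, action_dim)
    return float(np.sum(actions ** 2))   # stand-in objective

# Population size 40 and initial step-size 1.0, as stated in the setup row.
es = cma.CMAEvolutionStrategy(x0, 1.0, {"popsize": 40})
for _ in range(200):                     # episode budget varies by task
    candidates = es.ask()
    es.tell(candidates, [rollout_cost(c) for c in candidates])
    if es.stop():
        break

best_trajectory = es.result.xbest.reshape(horizon, action_dim)
```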