DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics

Authors: Zhiao Huang, Feng Chen, Yewen Pu, Chunru Lin, Hao Su, Chuang Gan

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (5 experiments) | "In this section, we aim to justify the effectiveness of our vision-language task representation in guiding the differentiable physics solver."
Researcher Affiliation | Collaboration | Zhiao Huang, Computer Science & Engineering, University of California, San Diego (z2huang@ucsd.edu); Feng Chen, Institute for Interdisciplinary Information Sciences, Tsinghua University (chenf20@mails.tsinghua.edu.cn); Yewen Pu, Autodesk (yewen.pu@autodesk.com); Chunru Lin, UMass Amherst (chunrulin@umass.edu); Hao Su, University of California, San Diego (haosu@ucsd.edu); Chuang Gan, MIT-IBM Watson AI Lab, UMass Amherst (chuangg@umass.edu)
Pseudocode | No | The paper describes a domain-specific language (DSL) and shows examples of optimization programs in Figure 5 and Table 2, but it does not present a formal pseudocode or algorithm block for the overall method.
Open Source Code | No | "both the GUI and dataset will be made public." (This is a future promise, not concrete, current access to source code for the described methodology.)
Open Datasets | No | "Using our task annotation tool, we have created a dataset called SoftVL100, which consists of 100 tasks, and there are more than 4 stages on average. [...] both the GUI and dataset will be made public." (The dataset exists, but no concrete access information, such as a link, DOI, or specific citation for public availability, is provided.)
Dataset Splits | No | The paper states "We picked 20 representative task stages as our test bed from the SoftVL100" but does not provide the specific percentages or counts for the training, validation, and test splits needed for reproducibility.
Hardware Specification | Yes | "For a single-stage task, it takes 10 minutes for 300 gradient descent steps on a machine with NVIDIA GeForce RTX 2080, for optimizing a trajectory with 80 steps."
Software Dependencies | No | The paper mentions "PyTorch [68]" and "stable-baselines3" but does not provide specific version numbers for these software components.
Experiment Setup | Yes | "For differentiable physics solvers, we run Adam [39] optimization for 500 gradient steps using a learning rate of 0.02. [...] Table 3: Parameters for Reinforcement Learning"
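The experiment setup quoted above can be illustrated with a minimal sketch of gradient-based trajectory optimization through a differentiable simulator using Adam with the paper's stated learning rate of 0.02. This is not the authors' solver: the toy point-mass dynamics, the 80-step horizon, and all function names below are illustrative assumptions standing in for the paper's differentiable soft-body physics.

```python
import torch

def simulate(actions: torch.Tensor, x0: torch.Tensor) -> torch.Tensor:
    """Toy differentiable 'physics': integrate actions as accelerations
    of a point mass. A stand-in for a soft-body simulator rollout."""
    x, v = x0, torch.zeros_like(x0)
    dt = 0.1
    for a in actions:
        v = v + dt * a
        x = x + dt * v
    return x  # final state of the rollout

horizon = 80                              # trajectory length, as in the quote
target = torch.tensor([1.0, 0.5])         # illustrative goal state
actions = torch.zeros(horizon, 2, requires_grad=True)
opt = torch.optim.Adam([actions], lr=0.02)  # lr from the paper's setup

for step in range(300):
    opt.zero_grad()
    loss = (simulate(actions, torch.zeros(2)) - target).pow(2).sum()
    loss.backward()  # gradients flow through the entire rollout
    opt.step()
```

The key property being exercised is that the whole rollout is differentiable, so a single `loss.backward()` yields gradients for every action in the trajectory; the paper's solver applies the same pattern to soft-body dynamics with task losses derived from its vision-language representation.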