Scalable Differentiable Physics for Learning and Control
Authors: Yi-Ling Qiao, Junbang Liang, Vladlen Koltun, Ming Lin
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a variety of experiments to evaluate the presented approach. We begin by comparing to other differentiable physics frameworks, with particular attention to scalability and generality. We then conduct ablation studies to evaluate the impact of the techniques presented in Sections 5 and 6. Lastly, we provide case studies that illustrate the application of the presented approach to inverse problems and learning control. |
| Researcher Affiliation | Collaboration | University of Maryland, College Park; Intel Labs. |
| Pseudocode | No | The paper describes methods and derivations but does not include explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available on our project page: https://gamma.umd.edu/researchdirections/mlphysics/diffsim |
| Open Datasets | No | The paper describes custom benchmark scenes and application scenarios (e.g., 'cubes are stacked', 'marble is supported by a soft sheet') but does not specify or provide access information for any publicly available or open datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts, or citations to predefined splits) for training, validation, or testing. |
| Hardware Specification | Yes | All experiments are run on an Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz. |
| Software Dependencies | Yes | The automatic differentiation is implemented in PyTorch 1.3 (Paszke et al., 2019). |
| Experiment Setup | Yes | The neural network is an MLP with 50 nodes in the first layer and 200 nodes in the second, with ReLU activations. In each episode, we fix the initial position of the manipulator and the object while the target position is randomized. (A minimal sketch of this setup follows the table.) |
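The quoted setup pins down only the hidden-layer widths and the activation function. The sketch below is one plausible PyTorch rendering of that policy network and training signal; the input/output dimensions (`obs_dim`, `act_dim`), the `simulate` placeholder, the loss, and the randomized target are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- the quoted setup does not specify them.
obs_dim, act_dim = 12, 3

# Two-layer MLP as described: 50 units, then 200 units, ReLU activations.
policy = nn.Sequential(
    nn.Linear(obs_dim, 50),
    nn.ReLU(),
    nn.Linear(50, 200),
    nn.ReLU(),
    nn.Linear(200, act_dim),
)

def simulate(action: torch.Tensor) -> torch.Tensor:
    # Placeholder for the differentiable physics step; in the paper this
    # role is played by the simulator implemented with PyTorch 1.3 autograd.
    return action * 0.1

obs = torch.randn(obs_dim)
target = torch.randn(act_dim)   # target position randomized per episode
action = policy(obs)
loss = ((simulate(action) - target) ** 2).sum()
loss.backward()                 # gradients flow through the simulation
```

Because the simulation step is itself differentiable, the control loss can be backpropagated through the physics into the policy weights, which is the training signal the paper's learning-control case studies rely on.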