Fine-grained Differentiable Physics: A Yarn-level Model for Fabrics
Authors: Deshan Gong, Zhanxing Zhu, Andrew J. Bulpitt, He Wang
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through comprehensive evaluation and comparison, we demonstrate our model's explicability in learning meaningful physical parameters, versatility in incorporating complex physical structures and heterogeneous materials, data-efficiency in learning, and high-fidelity in capturing subtle dynamics. [...] 4 EXPERIMENTS |
| Researcher Affiliation | Academia | Deshan Gong1, Zhanxing Zhu2,3, Andrew J. Bulpitt1 and He Wang1 1School of Computing, University of Leeds 2School of Informatics, University of Edinburgh 3Peking University |
| Pseudocode | No | The paper describes methods through textual explanation and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available in: https://github.com/realcrane/Fine-grained-Differentiable-Physics-A-Yarn-level-Model-for-Fabrics.git |
| Open Datasets | No | We employ a traditional indifferentiable yarn-level simulator (Cirio et al., 2014) to generate the ground-truth data, and build a dataset of fabrics with three types of yarns and three types of woven patterns. |
| Dataset Splits | No | The paper specifies training on the first 5, 10, or 25 frames and testing on the whole 50 frames, but does not explicitly mention a separate validation split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software for data generation ('Cirio et al., 2014') and algorithms ('implicit Euler', 'Stochastic Gradient Descent'), but does not specify version numbers for any key software components or libraries used in its implementation. |
| Experiment Setup | Yes | We use Stochastic Gradient Descent and run 70 epochs for training. The simulation is conducted for 500 steps with h = 0.001s. |
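
The "Experiment Setup" row above reports the only quantitative training details given in the paper: Stochastic Gradient Descent, 70 epochs, and simulation rollouts of 500 steps with h = 0.001s. The sketch below is a minimal, hypothetical illustration of such a differentiable-simulation training loop, assuming a PyTorch implementation; the function `simulate_step`, the learnable parameter vector `params`, and the ground-truth trajectory are stand-ins, not the authors' actual code.

```python
# Minimal sketch of a differentiable-physics training loop matching the
# reported setup (SGD, 70 epochs, 500 steps, h = 0.001s). All names below
# (simulate_step, params, gt_positions) are hypothetical placeholders.
import torch

h = 0.001          # simulation time step in seconds, as reported
num_steps = 500    # simulation length per rollout, as reported
num_epochs = 70    # training epochs with SGD, as reported
train_frames = 25  # the paper trains on the first 5, 10, or 25 frames

# Hypothetical learnable physical parameters (e.g., yarn stiffness, damping).
params = torch.nn.Parameter(torch.tensor([1.0, 1.0]))
optimizer = torch.optim.SGD([params], lr=1e-3)

def simulate_step(state, params, h):
    """Placeholder for one differentiable simulation update (the paper uses implicit Euler)."""
    pos, vel = state
    # Toy damped update standing in for the real yarn-level dynamics.
    vel = vel - h * params[0] * pos - h * params[1] * vel
    pos = pos + h * vel
    return pos, vel

# Hypothetical ground-truth trajectory; the paper generates ground truth with
# the non-differentiable simulator of Cirio et al. (2014).
gt_positions = torch.randn(num_steps, 3)

for epoch in range(num_epochs):
    optimizer.zero_grad()
    state = (gt_positions[0].clone(), torch.zeros(3))
    loss = torch.tensor(0.0)
    # Fit the learnable parameters on the first `train_frames` frames only,
    # mirroring the train/test protocol described in the Dataset Splits row.
    for t in range(1, train_frames):
        state = simulate_step(state, params, h)
        loss = loss + torch.nn.functional.mse_loss(state[0], gt_positions[t])
    loss.backward()
    optimizer.step()
```

Because gradients flow through every simulation step, the learned parameters remain physically interpretable, which is the "explicability" property quoted in the Research Type row.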