Amortized Synthesis of Constrained Configurations Using a Differentiable Surrogate

Authors: Xingyuan Sun, Tianju Xue, Szymon Rusinkiewicz, Ryan P. Adams

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the approach on two case studies: extruder path planning in additive manufacturing and constrained soft robot inverse kinematics. We compare our approach to direct optimization of the design using the learned surrogate, and to supervised learning of the synthesis problem. We find that our approach produces higher quality solutions than supervised learning, while being competitive in quality with direct optimization, at a greatly reduced computational cost.
Researcher Affiliation | Academia | 1 Department of Computer Science, 2 Department of Civil and Environmental Engineering, Princeton University. {xs5, txue, smr, rpa}@princeton.edu
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described in the paper.
Open Datasets | No | To generate the dataset for calibrating the decoder, we first use elliptical slice sampling [63] (New BSD License) to sample random extruder paths from a Gaussian process. We then use a physical simulator built using Bullet [22] (zlib License), calibrated to a real printer, to predict the realization (fiber path) for each extruder path. We generate 10,000 paths, split into 90% training, 5% validation, and 5% testing.
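The response above describes generating extruder paths by sampling from a Gaussian process. As a point of reference, the minimal sketch below draws one random path directly from an assumed RBF-kernel GP prior; the paper itself uses elliptical slice sampling [63] on top of the GP, and the kernel, lengthscale, and path parameterization here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: draw one random 1-D path from a Gaussian-process prior.
# The paper uses elliptical slice sampling [63]; here we sample directly from
# an assumed RBF-kernel prior purely to illustrate the kind of random paths
# being generated. Kernel choice and hyperparameters are assumptions.
def rbf_kernel(t, lengthscale=0.1, variance=1.0):
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

t = np.linspace(0.0, 1.0, 200)              # parameterization of the path
K = rbf_kernel(t) + 1e-8 * np.eye(len(t))   # jitter for numerical stability
L = np.linalg.cholesky(K)

rng = np.random.default_rng(0)
path = L @ rng.standard_normal(len(t))      # one sampled extruder path
```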
Dataset Splits | Yes | We generate 10,000 paths, split into 90% training, 5% validation, and 5% testing.
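The split is stated only as percentages; a minimal sketch of partitioning the 10,000 generated paths is below. The use of NumPy, the random seed, and the variable names are assumptions for illustration, not details from the paper.

```python
import numpy as np

# Hypothetical reconstruction of the 90/5/5 split described above.
num_paths = 10_000
rng = np.random.default_rng(0)
indices = rng.permutation(num_paths)

n_train = int(0.90 * num_paths)   # 9,000 training paths
n_val = int(0.05 * num_paths)     # 500 validation paths

train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]  # remaining 500 test paths
```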
Hardware Specification | Yes | We then evaluate the inference time of the three algorithms on a server with two Intel(R) Xeon(R) E5-2699 v3 CPUs running at 2.30 GHz. Since small neural networks generally run faster on the CPU, we run all of the tests solely on CPU.
Software Dependencies | No | We train every model with a learning rate of 1 × 10⁻³ for 10 epochs using PyTorch [68] and Adam optimizer [54]. For direct-optimization, we use the BFGS implementation in SciPy [92]. The specific version numbers for these software components are not explicitly stated in the text describing their use.
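For the direct-optimization baseline, the response above names only SciPy's BFGS routine; a hedged sketch of such a call is shown below. The objective `surrogate_loss`, the design dimensionality, and the initial guess are placeholders, not the paper's actual learned surrogate.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective standing in for the learned surrogate's loss;
# the real objective is the differentiable surrogate described in the paper.
def surrogate_loss(design):
    return float(np.sum((design - 1.0) ** 2))

x0 = np.zeros(8)  # assumed design dimensionality, for illustration only
result = minimize(surrogate_loss, x0, method="BFGS")
print(result.x, result.fun)
```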
Experiment Setup | Yes | For decoder, encoder, and direct-learning, we use an MLP with 5 hidden layers and ReLU as the activation function. We train every model with a learning rate of 1 × 10⁻³ for 10 epochs using PyTorch [68] and Adam optimizer [54].
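The reported settings translate into roughly the PyTorch sketch below. Only the 5 hidden layers, ReLU activations, Adam, learning rate of 1e-3, and 10 epochs come from the text above; hidden widths, input/output dimensions, the loss function, and the dummy data are assumptions made so the example runs standalone.

```python
import torch
import torch.nn as nn

# Minimal sketch of the reported setup: an MLP with 5 hidden layers and ReLU,
# trained with Adam at lr 1e-3 for 10 epochs. Layer widths, data shapes, and
# the MSE loss are assumptions for illustration, not details from the paper.
def make_mlp(in_dim, hidden_dim, out_dim, n_hidden=5):
    layers, dim = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
        dim = hidden_dim
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

model = make_mlp(in_dim=16, hidden_dim=128, out_dim=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy tensors standing in for the extruder-path dataset described above.
x = torch.randn(256, 16)
y = torch.randn(256, 16)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```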