Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Real-time design of architectural structures with differentiable mechanics and neural networks

Authors: Rafael Pastrana, Eder Medina, Isabel M. de Oliveira, Sigrid Adriaenssens, Ryan P. Adams

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach in two tasks, the design of masonry shells and cable-net towers. Our model achieves better accuracy and generalization than fully neural alternatives, and comparable accuracy to direct optimization but in real time, enabling fast and reliable design exploration. We further demonstrate its advantages by integrating it into 3D modeling software and fabricating a physical prototype.
Researcher Affiliation | Academia | Architecture, Computer Science, and Civil and Environmental Engineering, Princeton University
Pseudocode | No | The paper describes methods and models using mathematical equations and textual explanations, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured steps formatted like code.
Open Source Code | Yes | Source code is available at https://github.com/princetonlips/neural_fdm.
Open Datasets | No | We generate training data by sampling batches of target shapes X̂ from a task-specific family of shapes 𝒳̂ parametrized by a probability distribution (Section 4). Our model can be trained end-to-end because the encoder and decoder are both implemented in a differentiable programming environment (Bradbury et al., 2018). ... To this end, we first generate a test dataset of 100 asymmetric Bezier surfaces and create target geometries by linearly interpolating between the existing doubly symmetric dataset and the new asymmetric dataset (Appendix C).
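The sampling scheme quoted above (drawing batches of target shapes from a parametrized distribution rather than a fixed dataset) can be sketched in JAX. This is a hypothetical stand-in: the function name and the uniform height field are illustrative assumptions, whereas the paper's actual family parametrizes Bezier surfaces via their control points.

```python
import jax
import jax.numpy as jnp

def sample_target_shapes(key, batch_size, grid=8):
    """Sample a batch of target shapes on the fly (hypothetical stand-in).

    Each 'shape' here is a grid of heights drawn uniformly; the paper's
    pipeline instead samples Bezier surface control points from a
    task-specific probability distribution.
    """
    return jax.random.uniform(
        key, (batch_size, grid, grid), minval=0.0, maxval=1.0
    )

# A fresh batch per training step, keyed by a PRNG split.
key = jax.random.PRNGKey(0)
batch = sample_target_shapes(key, batch_size=64)
print(batch.shape)  # (64, 8, 8)
```

Because data is generated rather than stored, there is no fixed train/test split to release, which is consistent with the "No" results in the Open Datasets and Dataset Splits rows.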
Dataset Splits | No | We evaluate model performance by measuring the shape loss L_shape and the inference wall time on a test dataset of target shapes. ... We generate training data by sampling batches of target shapes X̂ from a task-specific family of shapes 𝒳̂ parametrized by a probability distribution (Section 4).
Hardware Specification | Yes | Training and inference for all models are executed on a CPU, on a MacBook Pro laptop with an M2 chip.
Software Dependencies | No | We implement our work in JAX (Bradbury et al., 2018) and Equinox (Kidger and Garcia, 2021). We use JAX FDM (Pastrana et al., 2023b) as our differentiable simulator and Optax (DeepMind et al., 2020) for gradient processing.
Experiment Setup | Yes | We train the three models using Adam (Kingma and Ba, 2017) with default parameters, except for the learning rate, for 10,000 steps. The training hyperparameters per model are tuned to maximize predictive performance on the test dataset via a random search over three seeds. ... For the shells task, we train MLPs with 2 hidden layers with H = 256 units each. The size of the input and output layers varies as per Table 3. The batch size is B = 64. The learning rate is 3 × 10⁻⁵ for the fully neural baselines (NN and PINN), and 5 × 10⁻⁵ for our model. ... In the cable-net towers task, we train MLPs with 4 hidden layers of size H = 256. In all the models, we set the batch size to B = 16, ... The global gradient clip value is 0.01 for all the neural models.