Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation

Authors: Giorgio Giannone, Akash Srivastava, Ole Winther, Faez Ahmed

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our framework to structural topology optimization, a fundamental problem in mechanical design, evaluating its performance on in- and out-of-distribution configurations. Our results demonstrate that TA outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost.
Researcher Affiliation | Collaboration | Giorgio Giannone (Massachusetts Institute of Technology; Technical University of Denmark), ggiorgio@mit.edu; Akash Srivastava (MIT-IBM Watson AI Lab), akashsri@mit.edu; Ole Winther (Technical University of Denmark; University of Copenhagen), olwi@dtu.dk; Faez Ahmed (Massachusetts Institute of Technology), faez@mit.edu
Pseudocode | Yes | Appendix C (Algorithms):
Algorithm 1: DOM with Trajectory Alignment
  Require: Optimized topologies X_0
  Require: Constraints C = (BC, L, VF)
  Require: Intermediate optimization steps X_opt
  while training do
    Sample batch (x_0, c, x_opt)
    Compute dense relaxation k = K(bc, l)
    Compute conditioning c = (k, c)
    Sample t, ϵ, x_opt^{s(t)}
    Compute x_t ∼ q(x_t | x_0)
    Forward model ϵ_θ(x_t, c)
    Compute loss L_{t-1}(x, c) = ||ϵ_θ(x_t, c) − ϵ||²₂
    Trajectory search x̂_θ(x_t, ϵ_θ) = (x_t − √(1 − ᾱ_t) ϵ_θ(x_t, c)) / √ᾱ_t
    Trajectory matching L_TA = ||x̂_θ(x_t, ϵ_θ) − x_opt^{s(t)}||²₂
    Compute loss L_DOM(θ) = L_{t-1}(x, c) + L_TA
    Backpropagate θ ← θ − ∇_θ L_DOM(θ)
  end while
Algorithm 2: DOM without Trajectory Alignment
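For reference, the training step in Algorithm 1 can be written out in PyTorch. The sketch below assumes a standard DDPM ϵ-prediction parameterization; the function name dom_ta_step, the model signature eps_model(x_t, t, cond), the proportional stage mapping from diffusion step t to SIMP iterate s(t), and the tensor layout of x_opt_traj are illustrative assumptions, not taken from the authors' released code.

```python
# Minimal sketch of one DOM training step with Trajectory Alignment (TA).
# Assumes a standard DDPM epsilon-prediction setup; all names and shapes are
# illustrative placeholders, not the authors' implementation.
import torch

def dom_ta_step(eps_model, x0, cond, x_opt_traj, alpha_bar, optimizer):
    """x0: optimized topologies (B, 1, 64, 64)
    cond: conditioning (dense kernel relaxation + constraints)
    x_opt_traj: intermediate SIMP iterates (B, S, 1, 64, 64)
    alpha_bar: cumulative noise schedule (T,)"""
    B, device = x0.shape[0], x0.device
    T = alpha_bar.shape[0]

    # Sample t, eps and form x_t ~ q(x_t | x0).
    t = torch.randint(0, T, (B,), device=device)
    eps = torch.randn_like(x0)
    a_bar = alpha_bar[t].view(B, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps

    # Forward model and standard noise-prediction loss L_{t-1}.
    eps_hat = eps_model(x_t, t, cond)
    loss_eps = ((eps_hat - eps) ** 2).mean()

    # Trajectory search: one-step estimate of the clean design from x_t.
    x0_hat = (x_t - (1 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()

    # Trajectory matching: align x0_hat with the SIMP iterate at stage s(t).
    # Here s(t) is a simple proportional mapping (high noise -> early iterate),
    # which is an assumption about how stages pair with diffusion steps.
    S = x_opt_traj.shape[1]
    s = (T - 1 - t) * (S - 1) // (T - 1)
    x_opt_s = x_opt_traj[torch.arange(B, device=device), s]
    loss_ta = ((x0_hat - x_opt_s) ** 2).mean()

    # Combined objective L_DOM and parameter update.
    loss = loss_eps + loss_ta
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```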
Open Source Code | Yes | Code: https://github.com/georgosgeorgos/trajectory-alignment-diffusion
Open Datasets | Yes | We use a dataset of optimized topologies gathered using SIMP as proposed in [66, 62]. ... We also release a large, multi-fidelity dataset of sub-optimal and optimal topologies obtained by solving minimum compliance optimization problems. This dataset contains low-resolution (64x64), high-resolution (256x256), optimal (120k), and suboptimal (600k) topologies. To our knowledge, this is the first large-scale dataset of optimized designs that also provides intermediate suboptimal iterations.
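As an illustration of how such a multi-fidelity dataset could be consumed during training, a minimal wrapper is sketched below; the .npz layout, key names, and array shapes are assumptions for illustration, not the released dataset's actual format.

```python
# Illustrative sketch of a dataset wrapper yielding (optimized topology,
# constraints, intermediate SIMP iterates). The file layout and key names
# are hypothetical, not the released dataset's actual format.
import numpy as np
from torch.utils.data import Dataset

class TopologyTrajectoryDataset(Dataset):
    def __init__(self, npz_path):
        data = np.load(npz_path)
        self.x0 = data["optimal"]               # (N, 64, 64) optimized designs
        self.constraints = data["constraints"]  # boundary conditions, loads, volume fraction
        self.x_opt = data["intermediate"]       # (N, S, 64, 64) suboptimal SIMP iterates

    def __len__(self):
        return len(self.x0)

    def __getitem__(self, i):
        return self.x0[i], self.constraints[i], self.x_opt[i]
```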
Dataset Splits | No | The paper states training on '30k optimized topologies' and testing on '1800 Test Configurations' but does not specify a validation split or how validation was performed.
Hardware Specification | No | The paper mentions that 'SIMP runs on CPU and the DDPM-based models on GPU' (Table 8 caption) but does not specify the CPU or GPU models or give other detailed hardware specifications.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries or programming languages used in the experiments.
Experiment Setup | Yes | We train all the models for 200k steps on 30k optimized topologies on a 64x64 domain. ... We set the hyperparameters, conditioning structure, and training routine as proposed in [62]. ... Table 11 (relevant hyperparameters for baselines and DOM on 64x64 datasets): Batch size: 64; Architecture: U-Net; Iterations: 200k; Learning rate: 2e-4; Loss: L_ε; Optimizer: Adam.
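For convenience, the Table 11 settings map to a configuration like the one sketched below; the dictionary itself and its key names are illustrative, not taken from the authors' code.

```python
# Table 11 hyperparameters gathered into a single config for reference;
# the dict and key names are illustrative, not from the released code.
config = {
    "batch_size": 64,
    "architecture": "UNet",
    "iterations": 200_000,
    "learning_rate": 2e-4,
    "loss": "epsilon",        # L_eps, the DDPM noise-prediction loss
    "optimizer": "Adam",
    "resolution": (64, 64),
    "num_train_topologies": 30_000,
}
```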