NTopo: Mesh-free Topology Optimization using Implicit Neural Representations

Authors: Jonas Zehnder, Yue Li, Stelian Coros, Bernhard Thomaszewski

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments indicate that our method is highly competitive for minimizing structural compliance objectives, and it enables self-supervised learning of continuous solution spaces for topology optimization problems.
Researcher Affiliation | Academia | Jonas Zehnder, Department of Computer Science and Operations Research, Université de Montréal (jonas.zehnder@umontreal.ca); Yue Li, Department of Computer Science, ETH Zurich (yue.li@inf.ethz.ch); Stelian Coros, Department of Computer Science, ETH Zurich (scoros@inf.ethz.ch); Bernhard Thomaszewski, Department of Computer Science, ETH Zurich (bthomasz@ethz.ch)
Pseudocode | Yes | Algorithm 1, NTopo: Neural topology optimization (a simplified sketch of this alternating loop is given below the table).
Open Source Code | No | The paper does not provide any explicit statements or links indicating that open-source code for the described methodology is available.
Open Datasets | No | The paper uses a self-supervised learning approach in which data is sampled and generated during the optimization process rather than drawn from a pre-defined public dataset, so it does not provide concrete access to a publicly available training dataset.
Dataset Splits | No | The paper describes its data generation process through Monte Carlo sampling and refers to 'stratified samples' (see the sampling sketch below the table), but it does not specify traditional training, validation, or test dataset splits.
Hardware Specification | Yes | All timings in the paper are reported on a GeForce RTX 2060 SUPER graphics card.
Software Dependencies | No | The paper mentions using 'Adam [57] as our optimizer' and 'SIREN [4] as neural representation', but it does not specify any version numbers for these or other software components (e.g., Python, PyTorch, CUDA).
Experiment Setup | Yes | We use Adam [57] as our optimizer for both displacement and density networks, and the learning rate of both is set to 3 × 10⁻⁴ for all experiments. We use ω0 = 60 for the first layer and 60 neurons in each hidden layer in 2D, and 180 hidden neurons in 3D. For the solution space learning setup, we use 256 neurons in each hidden layer in the density network to represent the larger solution space. For all experiments, we initialize the output of the density network close to a uniform density distribution of the target volume constraint by initializing the weights of the last layer close to zero and adjusting the bias accordingly. We used E1 = 1, ν = 0.3, p = 3, nb = 50, and [nx, ny] = [150, 50] in 2D and [nx, ny, nz] = [80, 40, 20] in 3D.
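
The Pseudocode row refers to Algorithm 1, which, as described in the paper, trains two coordinate-based networks with Adam: one for the displacement field and one for the density field. The sketch below illustrates one plausible reading of that alternating loop; it is not the authors' implementation. PyTorch, the unit-square design domain, the traction region, the number of inner iterations, and the quadratic volume-constraint penalty are all assumptions, and the paper's actual density update is more involved than a plain penalty term.

```python
import torch


class Siren(torch.nn.Module):
    """Small sine-activated MLP standing in for the SIREN used in the paper
    (omega_0 = 60 on the first layer, 60 hidden units in 2D)."""

    def __init__(self, in_dim, out_dim, width=60, omega0=60.0):
        super().__init__()
        self.l1 = torch.nn.Linear(in_dim, width)
        self.l2 = torch.nn.Linear(width, width)
        self.l3 = torch.nn.Linear(width, out_dim)
        self.omega0 = omega0

    def forward(self, x):
        h = torch.sin(self.omega0 * self.l1(x))
        h = torch.sin(self.l2(h))
        return self.l3(h)


def simp_energy_density(u, x, rho, p=3.0, E=1.0, nu=0.3):
    """SIMP-penalized small-strain elastic energy density at sample points x."""
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    # du_i/dx_j for every sample, obtained with autograd (samples are independent).
    grad_u = torch.stack(
        [torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0] for i in range(2)],
        dim=1)
    eps = 0.5 * (grad_u + grad_u.transpose(1, 2))          # small-strain tensor
    tr = eps[:, 0, 0] + eps[:, 1, 1]
    W = 0.5 * lam * tr ** 2 + mu * (eps ** 2).sum(dim=(1, 2))
    return rho.squeeze(-1) ** p * W


disp_net, dens_net = Siren(2, 2), Siren(2, 1)
opt_u = torch.optim.Adam(disp_net.parameters(), lr=3e-4)
opt_rho = torch.optim.Adam(dens_net.parameters(), lr=3e-4)
vol_target, penalty_weight = 0.5, 10.0                      # illustrative values

for outer in range(100):
    x = torch.rand(2000, 2, requires_grad=True)             # Monte Carlo samples in the unit square
    load_mask = (x[:, 0] > 0.95).float()                     # assumed traction region (right edge)

    # Simulation step: minimize the potential energy w.r.t. the displacement network.
    for _ in range(20):
        rho = torch.sigmoid(dens_net(x)).detach()
        u = disp_net(x)
        # Downward unit traction: potential = internal energy + u_y on the loaded region.
        potential = simp_energy_density(u, x, rho).mean() + (load_mask * u[:, 1]).mean()
        opt_u.zero_grad()
        potential.backward()
        opt_u.step()

    # Design step: at equilibrium, the compliance sensitivity w.r.t. rho is the
    # negative of the fixed-u derivative of the internal energy, hence the minus
    # sign; a quadratic penalty stands in for the paper's volume-constraint handling.
    rho = torch.sigmoid(dens_net(x))
    u = disp_net(x)
    internal = simp_energy_density(u, x, rho).mean()
    volume_violation = (rho.mean() - vol_target) ** 2
    loss_rho = -internal + penalty_weight * volume_violation
    opt_rho.zero_grad()
    loss_rho.backward()
    opt_rho.step()
```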
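
The Dataset Splits row notes that training data comes from stratified Monte Carlo samples generated during optimization rather than from a fixed dataset. Below is a minimal sketch of stratified sampling over a background grid with the [nx, ny] = [150, 50] resolution quoted in the Experiment Setup row; the jittered-cell scheme, the 3:1 domain dimensions, and the PyTorch usage are assumptions rather than the paper's exact sampler.

```python
import torch

def stratified_samples(nx=150, ny=50, width=1.5, height=0.5):
    """One uniformly jittered sample per cell of an nx-by-ny grid covering the
    design domain, so every region of the domain is hit in each batch."""
    ix, iy = torch.meshgrid(torch.arange(nx), torch.arange(ny), indexing="ij")
    cells = torch.stack([ix.reshape(-1), iy.reshape(-1)], dim=1).float()
    jitter = torch.rand(nx * ny, 2)                        # offset inside each cell
    xy = (cells + jitter) / torch.tensor([nx, ny]).float() # normalize to [0, 1]^2
    return xy * torch.tensor([width, height])              # scale to the domain size

batch = stratified_samples()  # shape (7500, 2); a fresh batch is drawn every iteration
```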
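
The Experiment Setup row also states that the density network is initialized to output a nearly uniform density at the target volume fraction by driving the last layer's weights toward zero and choosing its bias accordingly. A minimal sketch of that initialization is given below, assuming a sigmoid output activation; the function name and the default volume fraction are hypothetical.

```python
import math
import torch

def init_uniform_density(last_layer: torch.nn.Linear, vol_target: float = 0.5):
    """Start the density field near a constant value equal to the target volume
    fraction: near-zero last-layer weights plus a bias chosen so that the
    (assumed) sigmoid output equals vol_target everywhere."""
    torch.nn.init.normal_(last_layer.weight, std=1e-4)
    torch.nn.init.constant_(last_layer.bias, math.log(vol_target / (1.0 - vol_target)))
```

With a sigmoid output, a bias equal to logit(vol_target) makes the initial field constant at vol_target, so the volume constraint is approximately satisfied from the first iteration, matching the description in the quoted setup.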