DNN-based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel

Authors: Benjamin Dupuis, Arthur Jacot

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically confirm our theoretical observations and study how the filter size is affected by the architecture of the network. Our solution can easily be applied to any other coordinates-based generation method. ... We confirm and illustrate these theoretical observations with numerical experiments.
Researcher Affiliation | Academia | Benjamin Dupuis, Chair of Statistical Field Theory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, benjamin.dupuis@epfl.ch; Arthur Jacot, Chair of Statistical Field Theory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, arthur.jacot@epfl.ch
Pseudocode | No | The paper describes methods and propositions but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our implementation of the algorithm will be made public at https://github.com/benjiDupuis/DeepTopo.
Open Datasets | No | The paper describes a topology optimization problem on a grid and does not mention using or providing access to any specific publicly available dataset. It refers to established methods for SIMP ([1] and [18]) but not for the data itself.
Dataset Splits | No | The paper describes an optimization problem setup and does not mention explicit training, validation, or test splits in the machine-learning sense.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory specifications used for running the experiments; it only describes the embeddings used ('Most of our experiments were conducted with a torus embedding or a gaussian embedding'), not the hardware.
Software Dependencies | No | The paper mentions that its SIMP implementation is based on [1] and [18], and that it uses sparse Cholesky factorisation [9, 8] or the BiCGSTAB method [33]. However, it does not specify versions for any programming languages, libraries, or software packages (e.g., Python, PyTorch, TensorFlow, or specific solvers with version numbers).
Experiment Setup | Yes | Here are the hyperparameters used in the experiments. For the Gaussian embedding, we used n0 = 1000 and a length scale ℓ = 4. This embedding was followed by one hidden linear layer of size 1000 with standardized ReLU (x ↦ √2·max(0, x)) and a bias parameter β = 0.5. For the torus embedding we set the torus radius to r = 2 ... and the discretisation angle to δ = π / (2 max(nx, ny)). It was followed by 2 linear layers of size 1000 with β = 0.1. ... We used a cosine activation of the form x ↦ cos(ωx), ... When not stated otherwise we used ω = 5. ... we obtain similar results with other optimizers such as RPROP [24] (learning rate 10⁻³) and ADAM [16] (learning rate 10⁻³). (An illustrative sketch of this setup follows the table.)
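
To make the Experiment Setup row concrete, here is a minimal PyTorch sketch of a coordinate-based generator with a Gaussian (random-Fourier) embedding, using the quoted hyperparameters (n0 = 1000, length scale ℓ = 4, one hidden layer of size 1000, standardized ReLU, β = 0.5, Adam with learning rate 10⁻³). This is not the authors' released code: the random-Fourier form of the embedding, the way β enters the layer, the sigmoid squashing of the output, and the 60×30 grid are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code) of a coordinate-based density generator.
import torch
import torch.nn as nn


class GaussianEmbedding(nn.Module):
    """Random Fourier features with frequencies drawn i.i.d. Gaussian, scaled by 1/length_scale.
    The exact construction of the paper's Gaussian embedding is an assumption here."""

    def __init__(self, in_dim=2, n0=1000, length_scale=4.0):
        super().__init__()
        freqs = torch.randn(in_dim, n0) / length_scale
        self.register_buffer("freqs", freqs)

    def forward(self, x):                 # x: (batch, 2) grid coordinates
        return torch.cos(x @ self.freqs)  # (batch, n0)


class StandardizedReLU(nn.Module):
    """sqrt(2) * max(0, x): unit second moment for standard Gaussian inputs."""

    def forward(self, x):
        return (2.0 ** 0.5) * torch.relu(x)


class CoordinateGenerator(nn.Module):
    def __init__(self, n0=1000, hidden=1000, beta=0.5):
        super().__init__()
        self.embed = GaussianEmbedding(n0=n0)
        self.hidden = nn.Linear(n0, hidden)
        self.act = StandardizedReLU()
        self.out = nn.Linear(hidden, 1)
        self.beta = beta                  # bias parameter from the quoted setup;
                                          # how it enters the layer is an assumption

    def forward(self, coords):
        h = self.act(self.hidden(self.embed(coords)) + self.beta)
        return self.out(h)                # one density logit per coordinate


# Usage: evaluate densities on an nx-by-ny design grid and optimise with Adam, lr = 1e-3.
nx, ny = 60, 30                           # assumed grid size for illustration
ys, xs = torch.meshgrid(torch.arange(ny), torch.arange(nx), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()
model = CoordinateGenerator()
densities = torch.sigmoid(model(coords)).reshape(ny, nx)  # sigmoid squashing is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

In a full pipeline, the compliance returned by a SIMP-style finite-element solve on `densities` would be backpropagated through the network parameters at each optimisation step; that coupling is omitted here.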