Learning to Predict Structural Vibrations

Authors: Jan van Delden, Julius Schultz, Christopher Blech, Sabine Langer, Timo Lüddecke

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: each entry below lists the reproducibility variable, the extracted result, and the supporting LLM response.

Research Type: Experimental
LLM Response: To quantify such trade-offs systematically and foster the development of methods, we present a benchmark on the task of predicting the vibration of harmonically excited plates. Applying principles from operator learning and implicit models for shape encoding, our approach effectively addresses the prediction of highly variable frequency response functions occurring in dynamic systems. To quantify the prediction quality, we introduce a set of evaluation metrics and evaluate the method on our vibrating-plates benchmark.

Researcher Affiliation: Academia
LLM Response: (1) Institute of Computer Science, University of Göttingen; (2) Institute for Acoustics and Dynamics, Technische Universität Braunschweig

Pseudocode: No
LLM Response: The paper describes its methods and architectures but does not include any explicitly labeled pseudocode or algorithm blocks.

Open Source Code: Yes
LLM Response: Code, dataset and visualizations: https://github.com/ecker-lab/Learning_Vibrating_Plates

Open Datasets: Yes
LLM Response: The dataset included in our work is published in a curated dataset repository hosted by the University of Göttingen and accompanied by structured metadata. It is available from: https://doi.org/10.25625/UWF7RB

Dataset Splits: Yes
LLM Response: For each setting, 5000 instances for training and validation are generated. 1000 further instances are generated as a test set and are not used during training or to select a model. [...] As a validation set, 500 samples from the training dataset are set aside and excluded from training, and the checkpoint with the lowest MSE on these samples is selected.

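For concreteness, this split-and-selection protocol can be expressed in a few lines of PyTorch. This is a minimal sketch, not the authors' code: the dataset, network, and shapes below are toy stand-ins (the real loaders live in the linked repository); only the 4500/500 split and the lowest-validation-MSE checkpoint rule come from the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Toy stand-in for the plate data; shapes are hypothetical and the real
# dataset class lives in the authors' repository.
full_train = TensorDataset(torch.randn(5000, 16), torch.randn(5000, 1))

# Set aside 500 of the 5000 training instances as a validation set.
g = torch.Generator().manual_seed(0)
train_set, val_set = random_split(full_train, [4500, 500], generator=g)

model = torch.nn.Linear(16, 1)  # placeholder network
opt = torch.optim.AdamW(model.parameters())
loss_fn = torch.nn.MSELoss()

# Keep the checkpoint with the lowest MSE on the held-out validation set.
best_mse = float("inf")
for epoch in range(3):  # shortened here; the paper trains for 500 epochs
    for x, y in DataLoader(train_set, batch_size=256, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    with torch.no_grad():
        val_mse = sum(
            loss_fn(model(x), y).item() * len(x)
            for x, y in DataLoader(val_set, batch_size=500)
        ) / len(val_set)
    if val_mse < best_mse:
        best_mse = val_mse
        torch.save(model.state_dict(), "best_checkpoint.pt")
```
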
Hardware Specification: Yes
LLM Response: All trainings reported in this work were computed on a cluster with single A100 GPUs. [...] Computing a single sample out of the 6000 samples takes 2 minutes and 19 seconds on a machine with a 2 GHz CPU (20 physical cores).

Software Dependencies: No
LLM Response: The paper mentions using the AdamW optimizer, scipy, and MUMPS but does not specify version numbers for these software components or libraries.

Experiment Setup: Yes
LLM Response: The networks are trained using the AdamW optimizer [74], with β = [0.9, 0.99] and weight decay set to 0.00005. We further choose a cosine learning rate schedule with a warm-up period [75] of 50 epochs. The maximum learning rate is set to 0.001, except for the UNet and ViT architectures, for which it is set to 0.0005. In total, the networks are trained for 500 epochs.
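
The reported optimizer and schedule translate directly into a short PyTorch setup. This is a hedged sketch rather than the authors' implementation: the paper does not state the exact warm-up shape, so a linear ramp into a cosine decay (a common choice) is assumed here, and the network is a placeholder. The peak learning rate would be 0.0005 instead of 0.001 for the UNet and ViT variants.

```python
import math
import torch

model = torch.nn.Linear(16, 1)  # placeholder network

# AdamW with the hyperparameters reported in the paper.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.99), weight_decay=5e-5
)

# Cosine decay over 500 epochs with a 50-epoch warm-up. The warm-up
# shape is not specified in the paper; a linear ramp is assumed here.
warmup_epochs, total_epochs = 50, 500

def lr_lambda(epoch: int) -> float:
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(total_epochs):
    ...  # one training epoch over the plates data, then:
    scheduler.step()
```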