Solving Inverse Physics Problems with Score Matching

Authors: Benjamin Holzschuh, Simona Vegetti, Nils Thuerey

NeurIPS 2023

Reproducibility assessment (each entry lists the variable, its result, and the supporting LLM response):
Research Type: Experimental
LLM Response: "We highlight the advantages of our algorithm compared to standard denoising score matching and implicit score matching, as well as fully learned baselines for a wide range of inverse physics problems. The resulting inverse solver has excellent accuracy and temporal stability and, in contrast to other learned inverse solvers, allows for sampling the posterior of the solutions. Code and experiments are available at https://github.com/tum-pbs/SMDP." and "4 Experiments: We show the capabilities of the proposed algorithm with a range of experiments."
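
For context, the "standard denoising score matching" baseline named in the quote trains a network to match the score of a Gaussian perturbation kernel. A minimal sketch of that objective, assuming a placeholder network score_net and noise level sigma (not the paper's code):

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma):
    """Standard DSM objective (Vincent, 2011): train the network so that
    score_net(x + sigma*eps) matches -eps/sigma, the score of the Gaussian
    perturbation kernel N(x, sigma^2 I). Illustrative sketch only."""
    eps = torch.randn_like(x)          # Gaussian noise
    x_noisy = x + sigma * eps          # perturbed sample
    target = -eps / sigma              # score of the perturbation kernel
    pred = score_net(x_noisy, sigma)   # learned score estimate
    return ((pred - target) ** 2).sum(dim=-1).mean()
```
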
Researcher Affiliation: Academia
LLM Response: "1 Technical University of Munich, 85748 Garching, Germany; 2 Max Planck Institute for Astrophysics, 85748 Garching, Germany"
Pseudocode: No
LLM Response: The paper describes methods and processes in narrative text and equations, but does not include explicit pseudocode or algorithm blocks.
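
Although the paper states its method in prose and equations only, the reverse-time SDE samplers that score-based methods build on are commonly written as a short Euler-Maruyama loop. A generic, hypothetical sketch of such pseudocode (drift, g, and score_net are stand-ins, not the authors' algorithm; in SMDP the drift term would come from the differentiable physics solver):

```python
import torch

def reverse_sde_sample(score_net, drift, g, x_T, t_grid):
    """Euler-Maruyama integration of the reverse-time SDE
        dx = [f(x, t) - g(t)^2 * score(x, t)] dt + g(t) dW,
    run backward from t = T to t = 0. Generic illustration only."""
    x = x_T
    for t, t_prev in zip(t_grid[:-1], t_grid[1:]):  # t_grid descends T -> 0
        dt = t - t_prev                             # positive step size
        f = drift(x, t) - g(t) ** 2 * score_net(x, t)
        z = torch.randn_like(x)
        x = x - f * dt + g(t) * (dt ** 0.5) * z     # step backward in time
    return x
```
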
Open Source Code: Yes
LLM Response: "Code and experiments are available at https://github.com/tum-pbs/SMDP."
Open Datasets: No
LLM Response: The paper describes datasets generated by the authors (e.g., "2500 simulated trajectories", "2500 initial conditions", "250 simulations with corresponding trajectories generated with phiflow", "1000 simulation trajectories"), but it does not provide concrete access information (a link, DOI, repository, or formal citation for public access) for these datasets.
Dataset Splits: No
LLM Response: The paper mentions a "training data set" and a "test set" but does not explicitly specify a distinct validation set or describe how the data is split into training, validation, and test portions (e.g., percentages, counts, or a cross-validation setup).
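
For comparison, the kind of explicit split specification this criterion looks for can be stated in a few lines; the 80/10/10 ratios and the seed below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def split_trajectories(n, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Deterministic train/validation/test split over trajectory indices.
    Ratios and seed are hypothetical, chosen only for illustration."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# e.g., splitting the 2500 simulated trajectories mentioned in the paper
train_idx, val_idx, test_idx = split_trajectories(2500)
```
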
Hardware Specification: Yes
LLM Response: "takes ca. 240 seconds per sample on a single NVIDIA RTX 2070 GPU."
Software Dependencies: No
LLM Response: The paper mentions software such as JAX, phiflow, and torch.optim.LBFGS, but does not provide version numbers for these dependencies (e.g., "JAX 0.3.17" or "phiflow 1.2").
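
Recording exact versions would resolve this; one standard way is to query the installed package metadata at run time (the package names below are the ones the paper mentions; the printed versions are whatever happens to be installed):

```python
import importlib.metadata as md

# Log the versions of the dependencies the paper names.
for pkg in ("jax", "phiflow", "torch"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```
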
Experiment Setup: Yes
LLM Response: "For the 1-step loss and all data set sizes, we train for 250 epochs with a learning rate of 10e-3 and batch size of 256. In the first phase, we only keep every 5th point of a trajectory and discard the rest. Then, we again train for 250 epochs with the same batch size and a learning rate of 10e-4 but keep all points. Finally, we finetune the network with 750 training epochs and a learning rate of 10e-5."
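
Read literally, the quoted setup is a three-phase schedule. A minimal sketch of that loop, taking "10e-3", "10e-4", and "10e-5" at their presumably intended values of 1e-3, 1e-4, and 1e-5; the Adam optimizer and the make_loader/train_epoch helpers are assumptions, not details from the paper:

```python
import torch

# Three phases as quoted above; learning rates assume "10e-3" means 1e-3, etc.
PHASES = [
    dict(epochs=250, lr=1e-3, subsample=5),  # keep every 5th trajectory point
    dict(epochs=250, lr=1e-4, subsample=1),  # keep all points
    dict(epochs=750, lr=1e-5, subsample=1),  # fine-tuning
]

def train_schedule(model, make_loader, train_epoch):
    """make_loader(batch_size, subsample) and train_epoch(model, loader, opt)
    are hypothetical helpers standing in for the paper's data pipeline;
    the optimizer choice is likewise an assumption."""
    for phase in PHASES:
        opt = torch.optim.Adam(model.parameters(), lr=phase["lr"])
        loader = make_loader(batch_size=256, subsample=phase["subsample"])
        for _ in range(phase["epochs"]):
            train_epoch(model, loader, opt)
```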