Learning Similarity Metrics for Numerical Simulations

Authors: Georg Kohl, Kiwon Um, Nils Thuerey

ICML 2020

Reproducibility Variable | Result | LLM Response

Research Type: Experimental
  "To demonstrate that the proposed approach outperforms existing metrics for vector spaces and other learned, image-based metrics, we evaluate the different methods on a large range of test data. Additionally, we analyze generalization benefits of an adjustable training data difficulty and demonstrate the robustness of LSiM via an evaluation on three real-world data sets."

Researcher Affiliation: Academia
  "Department of Informatics, Technical University of Munich, Munich, Germany. Correspondence to: Georg Kohl <georg.kohl@tum.de>."

Pseudocode: No
  The paper describes the architecture and processes with flow diagrams and text, but it contains no explicitly labeled "Pseudocode" or "Algorithm" blocks.

Open Source Code: Yes
  "Our source code, data sets, and final model are available at https://github.com/tum-pbs/LSIM."

Open Datasets: Yes
  "Our source code, data sets, and final model are available at https://github.com/tum-pbs/LSIM. We created four training (Smo, Liq, Adv and Bur) and two test data sets (LiqN and AdvD) with ten parameter steps for each reference simulation. ... We include a shape data set (Sha) that features multiple randomized moving rigid shapes, a video data set (Vid) consisting of frames from random video footage, and TID2013 (Ponomarenko et al., 2015) as a perceptual image data set (TID). ... For the former, we make use of the Scalar Flow data set (Eckert et al., 2019), which consists of captured velocities of buoyant scalar transport flows. Additionally, we include velocity data from the Johns Hopkins Turbulence Database (JHTDB) (Perlman et al., 2007). ... As a third case, we use scalar temperature and geopotential fields from the Weather Bench repository (Rasp et al., 2020)."

Dataset Splits: No
  The paper distinguishes training and test data sets, and "validation" appears in its evaluation (e.g., Table 1 reports results on "validation and test data sets"), but it gives no numerical splits (percentages or counts) for the training, validation, and test portions, which would be needed for reproduction.

Hardware Specification: No
  The paper does not specify the hardware used for the experiments (GPU models, CPU types, or memory).

Software Dependencies: No
  The paper does not list version numbers for its software dependencies (e.g., Python, PyTorch, TensorFlow).

Experiment Setup: Yes
  "Unless otherwise noted, networks were trained with a batch size of 1 for 40 epochs with the Adam optimizer using a learning rate of 10^-5."
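The stated setup (batch size 1, 40 epochs, Adam, learning rate 10^-5) can be sketched as a minimal PyTorch training loop. This is an illustrative reconstruction, not the authors' code: the tiny convolutional model, the random dummy data, and the MSE loss are all placeholder assumptions standing in for the actual LSiM architecture and training data.

```python
import torch
import torch.nn as nn

# Placeholder model -- the real LSiM network is a learned similarity metric,
# not this toy regressor.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 1),
)

# Optimizer settings as quoted from the paper: Adam with learning rate 1e-5.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.MSELoss()  # assumed loss, for illustration only

# Dummy (input, target) pairs; batch size 1 as stated.
data = [(torch.randn(1, 3, 32, 32), torch.randn(1, 1)) for _ in range(4)]

# 40 epochs as stated.
for epoch in range(40):
    for x, y in data:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Only the batch size, epoch count, optimizer, and learning rate are taken from the paper; everything else here is scaffolding to make those settings concrete.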