Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs
Authors: Georg Kohl, Li-Wei Chen, Nils Thuerey
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | All metrics were evaluated on the volumetric data from Sec. 4, which contain a wide range of test sets that differ strongly from the training data. |
| Researcher Affiliation | Academia | Georg Kohl, Li-Wei Chen, Nils Thuerey; Technical University of Munich; {georg.kohl, liwei.chen, nils.thuerey}@tum.de |
| Pseudocode | No | The paper describes methods and a network architecture using figures and text but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code, datasets, and ready-to-use models are available at https://github.com/tum-pbs/VOLSIM. |
| Open Datasets | Yes | Our source code, datasets, and ready-to-use models are available at https://github.com/tum-pbs/VOLSIM. Furthermore, we use adjusted versions of the noise integration for two test sets, by adding noise to the density instead of the velocity in the Advection-Diffusion model (Adv D) and overlaying background noise in the liquid simulation (Liq N). We create seven test sets via method [B]. Four come from the Johns Hopkins Turbulence Database JHTDB (Perlman et al. 2007) that contains a large amount of direct numerical simulation (DNS) data... One additional test set (SF) via temporal translations is based on Scalar Flow (Eckert, Um, and Thuerey 2019)... |
| Dataset Splits | Yes | The corresponding validation sets are generated with a separate set of random seeds. (A minimal seed-split sketch follows the table.) |
| Hardware Specification | No | The paper discusses memory limitations for training but does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used for running experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | The final metric model was trained with the Adam optimizer with a learning rate of 10^-4 for 30 epochs via early stopping. (A training-loop sketch with these settings follows the table.) |
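The Dataset Splits row notes that validation data comes from a separate set of random seeds rather than from slicing one sample pool. The sketch below illustrates that pattern; the field generator and the specific seed ranges are invented for this example and are not taken from the paper.

```python
import numpy as np

def simulate_field(seed: int) -> np.ndarray:
    # Stand-in for a solver run: a reproducible random 3D field per seed.
    rng = np.random.default_rng(seed)
    return rng.random((16, 16, 16), dtype=np.float32)

# Disjoint seed sets keep training and validation data fully separate,
# mirroring the paper's "separate set of random seeds" for validation.
# The ranges below are assumed for illustration only.
train_seeds = range(0, 100)
val_seeds = range(1000, 1020)

train_data = [simulate_field(s) for s in train_seeds]
val_data = [simulate_field(s) for s in val_seeds]
assert set(train_seeds).isdisjoint(val_seeds)
```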
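The Experiment Setup row quotes the optimizer, learning rate, and epoch budget. A minimal PyTorch training loop with those settings is sketched below; the stub network, MSE loss, and early-stopping patience are assumptions standing in for the paper's multiscale CNN and its actual loss formulation.

```python
import torch
import torch.nn as nn

class StubMetricNet(nn.Module):
    """Tiny stand-in for the paper's multiscale CNN; the real architecture
    is described in the paper and the VOLSIM repository."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 8),
        )

    def forward(self, a, b):
        # Distance between two volumes as the L2 gap of their embeddings.
        return torch.norm(self.embed(a) - self.embed(b), dim=1)

def train(model, train_batches, val_batches, patience=5):
    # Quoted settings: Adam optimizer, learning rate 1e-4, up to 30 epochs
    # with early stopping. The MSE loss and the patience of 5 are
    # assumptions for this sketch, not details from the paper.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    best_val, stale = float("inf"), 0
    for epoch in range(30):
        model.train()
        for a, b, target in train_batches():
            opt.zero_grad()
            loss = loss_fn(model(a, b), target)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(a, b), target).item()
                      for a, b, target in val_batches())
        if val < best_val:
            best_val, stale = val, 0
        else:
            stale += 1
            if stale >= patience:
                break  # early stopping
    return model

# Smoke test on random volumes so the sketch runs end to end.
def random_batches():
    gen = torch.Generator().manual_seed(0)
    for _ in range(4):
        a = torch.rand(2, 1, 16, 16, 16, generator=gen)
        b = torch.rand(2, 1, 16, 16, 16, generator=gen)
        yield a, b, torch.rand(2, generator=gen)

trained = train(StubMetricNet(), random_batches, random_batches)
```

The smoke test at the end runs the loop on random volumes only to confirm the sketch executes; actual training would use the datasets released at https://github.com/tum-pbs/VOLSIM.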