Prometheus: Out-of-distribution Fluid Dynamics Modeling with Disentangled Graph ODE

Authors: Hao Wu, Huiyuan Wang, Kun Wang, Weiyan Wang, Changan Ye, Yangyu Tao, Chong Chen, Xian-Sheng Hua, Xiao Luo

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'In this work, we propose a new large-scale dataset Prometheus which simulates tunnel and pool fires across various environmental conditions and builds an extensive benchmark of 13 baselines, which demonstrates that the OOD generalization performance is far from satisfactory. ... Experiments validate the effectiveness of DGODE compared with state-of-the-art approaches.'
Researcher Affiliation | Collaboration | (1) Machine Learning Platform Department, Tencent; (2) University of Science and Technology of China; (3) The Center for Health AI and Synthesis of Evidence (CHASE), University of Pennsylvania; (4) Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine; (5) Terminus Group; (6) University of California, Los Angeles.
Pseudocode | Yes | Algorithm 1: DGODE Framework for OOD Fluid Dynamics Modeling (a generic graph-ODE sketch follows the table).
Open Source Code | No | The paper only provides a link for the Prometheus dataset: 'Our Prometheus dataset can be found at the following link: https://huggingface.co/datasets/easylearning/Prometheus.' There is no explicit statement or link for open-source code of the methodology described in the paper.
Open Datasets | Yes | 'In this work, we first build a large-scale OOD fluid dynamics dataset Prometheus using extensive fire simulations. ... Our Prometheus dataset can be found at the following link: https://huggingface.co/datasets/easylearning/Prometheus.' (A download sketch follows the table.)
Dataset Splits | Yes | 'Table 5. Details for benchmarks. ... VALIDATION SET SIZE 2000 ... VALIDATION SET SIZE 1000'
Hardware Specification | Yes | 'To ensure fairness, all methods train on a single NVIDIA-A100 using the ADAM optimizer for MSE loss over 500 epochs, with an initial learning rate of 10^-3.'
Software Dependencies | No | The paper mentions using the 'ADAM optimizer' and 'MSE loss' but does not provide specific version numbers for software dependencies such as deep learning frameworks or programming languages.
Experiment Setup | Yes | 'To ensure fairness, all methods train on a single NVIDIA-A100 using the ADAM optimizer for MSE loss over 500 epochs, with an initial learning rate of 10^-3. We set the batch size to 20.' (A matching training-loop sketch follows the table.)
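
The Pseudocode row reports that the paper includes Algorithm 1 (the DGODE framework), which is not reproduced in this report. As a generic illustration only, the sketch below shows the basic graph-ODE idea such a framework builds on: latent node states evolved by a neural ODE whose vector field comes from message passing over the graph. The module names, dimensions, and the use of a torchdiffeq solver are assumptions for illustration, not the authors' implementation.

```python
# Generic graph-ODE sketch (NOT the paper's Algorithm 1): latent node states
# are integrated by an ODE whose dynamics are a message-passing network.
# Assumes torchdiffeq is installed; all names and sizes are hypothetical.
import torch
import torch.nn as nn
from torchdiffeqimport_guard = None  # placeholder removed below
```

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class GraphODEFunc(nn.Module):
    """dz/dt = MLP(A @ z): neighbor aggregation followed by a small MLP."""

    def __init__(self, adj: torch.Tensor, hidden_dim: int):
        super().__init__()
        self.adj = adj  # (num_nodes, num_nodes) normalized adjacency
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, t, z):
        # z: (num_nodes, hidden_dim) latent node states at time t
        return self.net(self.adj @ z)


num_nodes, hidden_dim = 64, 32
adj = torch.eye(num_nodes)               # placeholder graph structure
z0 = torch.randn(num_nodes, hidden_dim)  # initial latent states
t = torch.linspace(0.0, 1.0, steps=10)   # query time points
func = GraphODEFunc(adj, hidden_dim)
z_t = odeint(func, z0, t)                # shape: (10, num_nodes, hidden_dim)
```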
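
The Open Datasets row gives the Hugging Face link as the only released artifact. A minimal sketch for fetching it is below, assuming the standard `huggingface_hub` client and a public repository; the file layout inside the dataset repo is not described in this report.

```python
# Minimal sketch: download the Prometheus dataset repository from the
# Hugging Face Hub. Assumes `huggingface_hub` is installed and the repo is
# public; the internal file layout is not documented here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="easylearning/Prometheus",
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```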
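
The reported experiment setup (Adam optimizer, MSE loss, 500 epochs, initial learning rate 10^-3, batch size 20, single NVIDIA A100) maps onto a standard PyTorch training loop. The sketch below is schematic under those settings; the model and dataset are placeholders, since the actual architectures and data pipeline are not given in this report.

```python
# Schematic training loop matching the reported setup (Adam, MSE, 500 epochs,
# lr 1e-3, batch size 20, single GPU). `model` and `train_dataset` are
# placeholders, not the paper's architecture or data pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(16, 16).to(device)                 # placeholder model
train_dataset = TensorDataset(                       # placeholder data
    torch.randn(200, 16), torch.randn(200, 16)
)
loader = DataLoader(train_dataset, batch_size=20, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for epoch in range(500):
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```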