Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Air Quality Prediction with Physics-Guided Dual Neural ODEs in Open Systems

Authors: Jindong Tian, Yuxuan Liang, Ronghui Xu, Peng Chen, Chenjuan Guo, Aoying Zhou, Lujia Pan, Zhongwen Rao, Bin Yang

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results demonstrate that Air-DualODE achieves state-of-the-art performance in predicting pollutant concentrations across various spatial scales, thereby offering a promising solution for real-world air quality challenges. The code is available at: https://github.com/decisionintelligence/Air-DualODE.
Researcher Affiliation | Collaboration | 1 East China Normal University, 2 Huawei Noah's Ark Lab, 3 Hong Kong University of Science and Technology (Guangzhou)
Pseudocode | No | The paper describes the methodology, including equations and a model overview figure, but it does not present any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at: https://github.com/decisionintelligence/Air-DualODE.
Open Datasets | Yes | We evaluate the performance of our model using two real-world air quality datasets: the Beijing1 dataset and the KnowAir2 dataset. ... 1https://www.biendata.xyz/competition/kdd_2018/ 2https://github.com/shuowang-ai/PM2.5-GNN
Dataset Splits | Yes | The Beijing dataset is divided chronologically in a 7:1:2 ratio for training, validation, and testing. ... Unlike the Beijing dataset, the KnowAir dataset is divided chronologically in a 2:1:1 ratio due to its ample four-year data span (Wang et al., 2020).
Hardware Specification | Yes | All experiments are conducted using PyTorch 2.3.0 and executed on an NVIDIA GeForce RTX 3090 GPU, utilizing the Adam optimizer.
Software Dependencies | Yes | All experiments are conducted using PyTorch 2.3.0 and executed on an NVIDIA GeForce RTX 3090 GPU, utilizing the Adam optimizer.
Experiment Setup | Yes | The batch size is set to 32, and the initial learning rate is 0.005, which decays at specific intervals with a decay rate of 0.1. A GRU-based RNN encoder is employed for the Coefficient Estimator and the encoders of both the Physics Dynamics and Data-Driven Dynamics. For the ODE solver, we adopt the dopri5 numerical integration method in combination with the adjoint method (Chen et al., 2018). For Dynamics Fusion, λ1 and λ2 are set to 1 and 0.8, respectively, to differentiate distinct dynamics, and the number of GNN layers is set to 3. The solver's relative tolerance (rtol) and absolute tolerance (atol) are set to 1e-3.
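The chronological splits quoted in the Dataset Splits row can be sketched as a simple index computation. This is a minimal illustration: the 7:1:2 and 2:1:1 ratios come from the paper, while the helper name and the sample counts in the example are our own assumptions.

```python
def chronological_split(n_samples, ratios):
    """Split a time series of length n_samples chronologically (no shuffling).

    ratios: e.g. (7, 1, 2) for train/val/test as reported for the Beijing
    dataset, or (2, 1, 1) as reported for the KnowAir dataset.
    Returns (train, val, test) as half-open (start, stop) index ranges.
    """
    total = sum(ratios)
    train_end = n_samples * ratios[0] // total
    val_end = train_end + n_samples * ratios[1] // total
    return (0, train_end), (train_end, val_end), (val_end, n_samples)

# Beijing-style 7:1:2 split over a hypothetical 10000 time steps:
train, val, test = chronological_split(10000, (7, 1, 2))
# train = (0, 7000), val = (7000, 8000), test = (8000, 10000)
```

Because the split is chronological, the test range always covers the most recent time steps, which matches standard practice for forecasting benchmarks.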
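The hyperparameters quoted in the Experiment Setup row can be collected into a small config with a step-decay learning-rate helper. This is a sketch, not the authors' code: the excerpt only says the learning rate "decays at specific intervals", so the `decay_epochs` milestones below are placeholder assumptions; the remaining values (batch size, initial rate, decay rate, solver, tolerances, fusion weights, GNN depth) are taken from the quote.

```python
# Hyperparameters reported for Air-DualODE training.
# NOTE: decay_epochs is a placeholder assumption; the paper excerpt does not
# state the actual decay intervals.
CONFIG = {
    "batch_size": 32,
    "init_lr": 0.005,
    "lr_decay_rate": 0.1,
    "decay_epochs": [30, 60],  # placeholder milestones
    "ode_solver": "dopri5",    # with the adjoint method (Chen et al., 2018)
    "rtol": 1e-3,
    "atol": 1e-3,
    "lambda1": 1.0,            # Dynamics Fusion weights
    "lambda2": 0.8,
    "gnn_layers": 3,
}

def lr_at_epoch(epoch, cfg=CONFIG):
    """Step decay: multiply init_lr by lr_decay_rate at each milestone passed."""
    n_decays = sum(1 for m in cfg["decay_epochs"] if epoch >= m)
    return cfg["init_lr"] * cfg["lr_decay_rate"] ** n_decays

# lr_at_epoch(0) -> 0.005; the rate drops by 10x at each milestone.
```

In PyTorch this schedule corresponds to `torch.optim.lr_scheduler.MultiStepLR` with `gamma=0.1`, paired with the Adam optimizer the paper reports.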