Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Topology-aware Neural Flux Prediction Guided by Physics
Authors: Haoyang Jiang, Jindong Wang, Xingquan Zhu, Yi He
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations on two real-world directed graph data, namely, water flux network and urban traffic flow network, demonstrate the effectiveness of our proposal. ... Experimental results demonstrate that PhyNFP enhances GNN performance by improving sensitivity to directional dependencies and high-frequency dynamics. ... Table 1 presents the MSE and directional sensitivity scores (DS and RDS) for different models on river and traffic networks. |
| Researcher Affiliation | Academia | 1Department of Data Science, William & Mary, Williamsburg, VA, USA 2College of Engineering & Computer Science, Florida Atlantic University, Boca Raton, FL, USA. Correspondence to: Dr. Yi He <EMAIL>. |
| Pseudocode | No | The paper describes the proposed method using mathematical equations and prose but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code for this paper is available at https://github.com/HaoyangJiang-WM/PhysicsNFP. |
| Open Datasets | Yes | Datasets. Two datasets collected from real-world directed graphs are used. 1) River, preprocessed from LamaH-CE (Klingler et al., 2021), which documents historical discharge and meteorological measurements with hourly resolution in the Danube river network. ... 2) Traffic, preprocessed from PEMS-04 (Yu et al., 2018), that comprises traffic flow records collected from roadside sensor stations. |
| Dataset Splits | No | We set W = 24 for training and n = 6 for the lead time prediction for applicability. ... In Figure 2(b), (c), and (d), the blue line indicates the mean prediction over the test set, while the gray area represents the 3σ confidence interval. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | No | We set W = 24 for training and n = 6 for the lead time prediction for applicability. ... The model starts with an initial t = 0.7. ... We normalize all physical variables including the nodal features and output volume to the same scale in an element-wise fashion using standard score (LeCun et al., 2002). |
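The setup excerpt above fixes two preprocessing choices: standard-score (z-score) normalization of all physical variables, and a sliding window of W = 24 past steps used to predict n = 6 steps ahead. A minimal sketch of what that pipeline could look like is shown below; the function names, array shapes, and toy series are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def standard_score(x, eps=1e-8):
    # Element-wise z-score normalization (LeCun et al., 2002),
    # computed per feature over the time axis.
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True)
    return (x - mu) / (sigma + eps)

def sliding_windows(series, W=24, n=6):
    # Build (input window, lead-time target) pairs: each window of
    # W past steps is paired with the value n steps after its end.
    X, y = [], []
    for t in range(len(series) - W - n + 1):
        X.append(series[t:t + W])
        y.append(series[t + W + n - 1])
    return np.array(X), np.array(y)

# Toy usage: a single node's hourly discharge series (hypothetical data).
series = standard_score(np.arange(100, dtype=float))
X, y = sliding_windows(series, W=24, n=6)
print(X.shape, y.shape)  # (71, 24) (71,)
```

With W = 24 and n = 6, a series of length T yields T - W - n + 1 training pairs, which is why the paper's "No" verdicts above note that no explicit train/validation/test split sizes are reported beyond these window parameters.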