Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Wave-driven Graph Neural Networks with Energy Dynamics for Over-smoothing Mitigation
Authors: Peihan Wu, Hongda Qi, Sirong Huang, Dongdong An, Jie Lian, Qin Zhao
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets, including Cora, Citeseer, and PubMed, as well as real-world graphs, demonstrate that the proposed framework achieves state-of-the-art performance, effectively mitigating over-smoothing and enabling deeper, more expressive architectures. |
| Researcher Affiliation | Academia | Shanghai Engineering Research Center of Intelligent Education and Big Data, Shanghai Normal University, Shanghai, China |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/rene0329/EWGNN/. |
| Open Datasets | Yes | To evaluate the effectiveness of the proposed wave-driven propagation mechanism, we employ several widely used benchmark datasets: the citation graphs Cora, Citeseer, and PubMed [Sen et al., 2008], the protein graph ogbn-proteins [Hu et al., 2020], and real-world graphs such as Pokec [Takac and Zabovsky, 2012]. |
| Dataset Splits | Yes | For each dataset, we adopt the standard public split [Yang et al., 2016], allocating 20 labeled nodes per class for training, 500 nodes for validation, and 1,000 nodes for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We also investigate the performance of EWGNN under different settings of α, comparing it to other models. As shown in Figure 2a, EWGNN with α = 0.5 produces larger feature updates, yielding significant improvements and SOTA performance at moderate depths, while α = 0.1 retains more of the input features, enabling the model to scale to deeper architectures. To further investigate the role of energy dynamics, we perform an ablation study comparing two fixed values of β (0.1 and 1.0) against the adaptively learned β, which is adjusted based on energy dynamics. |
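The report does not reproduce the paper's update equations, so the following is only a hypothetical sketch of a wave-style propagation layer under stated assumptions: a second-order (wave-equation) discretization in which α scales the graph-Laplacian term and β acts as a damping/energy coefficient. The function names (`normalized_adjacency`, `wave_propagate`) and the exact update form are illustrative, not taken from EWGNN; the point is only to show why a momentum-carrying update resists the all-nodes-equal fixed point of pure diffusion.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def wave_propagate(A, X0, alpha=0.5, beta=0.1, steps=8):
    """Hypothetical second-order (wave-like) feature propagation.

    Unlike first-order diffusion X <- A_norm X, the update keeps the
    previous state X_prev as 'momentum', so repeated application does
    not simply converge to the averaging fixed point (over-smoothing).
    alpha scales the Laplacian (neighbor-difference) term; beta damps
    the velocity X - X_prev, loosely playing the energy-control role.
    """
    A_norm = normalized_adjacency(A)
    X_prev, X = X0.copy(), X0.copy()
    for _ in range(steps):
        laplacian_term = A_norm @ X - X          # (A_norm - I) X
        X_next = 2.0 * X - X_prev + alpha * laplacian_term - beta * (X - X_prev)
        X_prev, X = X, X_next
    return X
```

With α = 0.5 each step makes larger neighbor-driven updates, while a smaller α = 0.1 changes features more slowly, mirroring the trade-off the ablation above describes between fast convergence at moderate depth and stability at larger depth.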