Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Graph Neural Ricci Flow: Evolving Feature from a Curvature Perspective

Authors: Jialong Chen, Bowen Deng, Zhen Wang, Chuan Chen, Zibin Zheng

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We illustrate that GNRF performs excellently on diverse datasets. ... 5 EXPERIMENT. To evaluate the model fairly, we collect a total of 14 datasets from 6 commonly used node classification benchmarks. We report 12 of these datasets in the main experiment: Cornell, Wisconsin, and Texas from WebKB used in Pei et al. (2020); Roman-Empire, Tolokers, Amazon-Ratings, Minesweeper and Questions from the Heterophilous Graph benchmark (Platonov et al., 2023); Cora Full, Cora ML, DBLP and Pubmed from the Citation Full benchmark (Bojchevski & Günnemann, 2017). ... 5.1 SEMI-SUPERVISED NODE CLASSIFICATION ... Main results (Table 1). ... Ablation study (Table 2). ... Resource consumption (Table 3). ... 5.2 CURVATURE. We demonstrate whether GNRF faithfully adheres to the theoretical guidance provided in Chapter 3. We measure the curvature distribution of all edges in the graph dataset at different time points (Figure 2, top) and record the variance of these curvatures over time (Figure 2, bottom). ... 5.3 DIRICHLET ENERGY AND FEATURE VISUALIZATION. We present the evolution of the Dirichlet energy of GNRF with random parameters on synthetic graphs without any training. ... We use t-SNE to visualize the node features of the Roman-Empire dataset (Fig. 4, right).
Researcher Affiliation Academia Jialong Chen, Bowen Deng, Zhen Wang, Chuan Chen, Zibin Zheng (Sun Yat-sen University)
Pseudocode Yes Algorithm 1 Solve GNRF with Forward difference method (PyTorch Geometric style)
Open Source Code Yes Code. An implementation is available at: https://github.com/Loong-Chan/GNRF
Open Datasets Yes Datasets. To evaluate the model fairly, we collect a total of 14 datasets from 6 commonly used node classification benchmarks. We report 12 of these datasets in the main experiment: Cornell, Wisconsin, and Texas from WebKB used in Pei et al. (2020); Roman-Empire, Tolokers, Amazon-Ratings, Minesweeper and Questions from the Heterophilous Graph benchmark (Platonov et al., 2023); Cora Full, Cora ML, DBLP and Pubmed from the Citation Full benchmark (Bojchevski & Günnemann, 2017). In addition, to verify the scalability of the model, we also introduce two larger-scale datasets: OGBN-Arxiv from Open Graph Benchmark (Hu et al., 2020) and OGBN-Year from Lim et al. (2021).
Dataset Splits Yes For all datasets, we uniformly adopted a random split strategy of 60%/20%/20% for the training, validation, and test sets.
Hardware Specification Yes Experimental Platform. Our code is implemented in Python 3.11.5, with the primary libraries being PyTorch 2.1.1, PyTorch Geometric 2.4.0, and Torchdiffeq 0.2.4. All experiments are conducted on a single NVIDIA 4090 GPU with 40GB of VRAM.
Software Dependencies Yes Experimental Platform. Our code is implemented in Python 3.11.5, with the primary libraries being PyTorch 2.1.1, PyTorch Geometric 2.4.0, and Torchdiffeq 0.2.4.
Experiment Setup Yes Hyperparameters. We fine-tune GNRF within the hyperparameter search space, performing up to 100 trials on each dataset. The hyperparameter search space is as follows: Table 4: Hyperparameter Search Space (learning rate [1e-5, 1e-2] log-uniform; weight decay [1e-6, 1e-3] log-uniform; dropout [0.01, 0.99] uniform; hidden dim {64, 128, 256} categorical; time [0.1, 10] log-uniform)
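The Dirichlet energy tracked in Section 5.3 measures how smoothly node features vary across edges. A minimal sketch of that quantity, E(X) = 1/2 * sum over edges (i,j) of ||x_i - x_j||^2; the helper below is illustrative and not taken from the GNRF codebase:

```python
import numpy as np

def dirichlet_energy(X, edges):
    """Half the sum of squared feature differences over the edge list."""
    energy = 0.0
    for i, j in edges:
        diff = X[i] - X[j]
        energy += diff @ diff
    return 0.5 * energy

# Tiny example: a 3-node path graph with 1-D features.
X = np.array([[0.0], [1.0], [3.0]])
edges = [(0, 1), (1, 2)]
energy = dirichlet_energy(X, edges)  # 0.5 * ((0-1)^2 + (1-3)^2)
```

Low energy means neighboring nodes have similar features (over-smoothing drives it toward zero), which is why the paper monitors its evolution over time.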
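Algorithm 1 in the paper solves the GNRF dynamics with a forward-difference (explicit Euler) scheme. A generic sketch of that integration loop, with plain graph diffusion standing in for GNRF's curvature-driven right-hand side (the stand-in dynamics and function names are assumptions, not the paper's code):

```python
import numpy as np

def forward_euler(f, x0, t_end, n_steps):
    """Explicit Euler: x_{k+1} = x_k + dt * f(x_k), integrated to t_end."""
    dt = t_end / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + dt * f(x)
    return x

# Stand-in dynamics: graph diffusion dx/dt = -L x on a single-edge graph.
L = np.array([[1.0, -1.0], [-1.0, 1.0]])  # graph Laplacian of one edge
x_T = forward_euler(lambda x: -L @ x, np.array([1.0, 0.0]), t_end=1.0, n_steps=100)
```

Each Euler step shrinks the feature gap across the edge by a factor (1 - 2*dt) while preserving the feature mean, mirroring the smoothing behavior the continuous flow would exhibit.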
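The 60%/20%/20% random split quoted under Dataset Splits can be sketched as boolean node masks in the PyTorch Geometric style; this is an illustrative helper under that assumption, not the authors' split code:

```python
import numpy as np

def random_split(n_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Random train/val/test node masks; the remainder after train+val is test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_nodes)
    n_train = int(train_frac * n_nodes)
    n_val = int(val_frac * n_nodes)
    train_mask = np.zeros(n_nodes, dtype=bool)
    val_mask = np.zeros(n_nodes, dtype=bool)
    test_mask = np.zeros(n_nodes, dtype=bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask

tr, va, te = random_split(100)
```

Using a seeded generator keeps the split reproducible across runs; varying the seed gives the independent random splits typically averaged over in node-classification benchmarks.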
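The Table 4 search space (log-uniform, uniform, and categorical parameters, up to 100 trials per dataset) can be sampled as follows; the sampler is a hypothetical sketch, as the paper does not specify which tuning library was used:

```python
import numpy as np

def sample_config(rng):
    """Draw one hyperparameter configuration from the Table 4 search space."""
    return {
        "lr": 10 ** rng.uniform(-5, -2),            # log-uniform in [1e-5, 1e-2]
        "weight_decay": 10 ** rng.uniform(-6, -3),  # log-uniform in [1e-6, 1e-3]
        "dropout": rng.uniform(0.01, 0.99),         # uniform in [0.01, 0.99]
        "hidden_dim": rng.choice([64, 128, 256]),   # categorical
        "time": 10 ** rng.uniform(-1, 1),           # log-uniform in [0.1, 10]
    }

rng = np.random.default_rng(0)
trials = [sample_config(rng) for _ in range(100)]  # up to 100 trials per dataset
```

Sampling the exponent uniformly and exponentiating gives the log-uniform distributions; dedicated tuners (e.g. Optuna-style `suggest_float(..., log=True)`) implement the same idea.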