Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Stochastic Block Model-Aware Topological Neural Networks for Graph Link Prediction

Authors: Yuzhou Chen, Xiao Guo, Shujie Ma

TMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Our extensive experiments for link prediction on both graphs and knowledge graphs show that SBM-TNN achieves state-of-the-art performance over a set of popular baseline methods. Extensive experiments on benchmark datasets clearly show that SBM-TNN delivers state-of-the-art performance on link prediction and knowledge graph completion tasks by a significant margin. (Abstract) Also, Section 6 is titled 'Experiments' and contains subsections for 'Datasets and Baselines', 'Experiment Settings', and 'Experiment Results' with performance tables.
Researcher Affiliation Academia Yuzhou Chen, EMAIL, Department of Statistics, University of California, Riverside; Xiao Guo, EMAIL, School of Mathematics, Northwest University, Xi'an; Shujie Ma, EMAIL, Department of Statistics, University of California, Riverside. All listed affiliations are academic institutions.
Pseudocode No The paper describes the methods in detail using mathematical equations and textual descriptions (e.g., in sections 3 and 4), but does not present any structured pseudocode or algorithm blocks.
Open Source Code Yes Code and data are publicly available at https://github.com/yuzhouguangc/SBM-TNN.
Open Datasets Yes We experiment on 2 types of networks for link prediction: (i) citation networks: Cora-ML, Citeseer, and PubMed (Sen et al., 2008) and (ii) graphs related to Amazon shopping records: Photo and Computers (Shchur et al., 2018). For knowledge graph completion tasks, we conduct experiments on 3 well-known KG datasets: (i) FB15k-237 (Toutanova et al., 2015; Toutanova & Chen, 2015), (ii) WN18RR (Dettmers et al., 2018), and (iii) NELL-995 (Xiong et al., 2017). Code and data are publicly available at https://github.com/yuzhouguangc/SBM-TNN.
Dataset Splits Yes For link prediction tasks, we randomly split edges into 85%/5%/10% for training, validation, and testing, and we evaluate link prediction using the ROC-AUC score on the test set. For KG completion tasks, we follow the settings in previous works (Vashishth et al., 2019; Schlichtkrull et al., 2018), i.e., triplets in these datasets are randomly split into training, validation, and test sets, and we evaluate the KG completion performance by using Mean Reciprocal Rank (MRR) and Hits@N (here we consider N ∈ {1, 3, 10}).
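The 85%/5%/10% random edge split quoted above can be sketched as follows. This is an illustrative snippet, not the authors' code; the function name `split_edges` and the toy edge list are assumptions for demonstration.

```python
import random

def split_edges(edges, fractions=(0.85, 0.05, 0.10), seed=0):
    """Randomly partition an edge list into train/val/test sets.

    `fractions` mirrors the 85%/5%/10% split described in the paper;
    this helper is a sketch, not the authors' implementation.
    """
    rng = random.Random(seed)
    shuffled = list(edges)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Toy graph with 100 edges as a stand-in for a real edge list.
edges = [(i, i + 1) for i in range(100)]
train, val, test = split_edges(edges)
print(len(train), len(val), len(test))  # → 85 5 10
```

A fixed seed makes the split reproducible across runs, which matters when comparing ROC-AUC on the held-out test edges.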
Hardware Specification Yes We implement our proposed SBM-TNN with the PyTorch framework on two NVIDIA RTX A5000 GPUs with 24 GiB RAM.
Software Dependencies No The paper mentions the 'PyTorch framework' but does not specify a version number for it or for any other key software dependency.
Experiment Setup Yes For link prediction, we perform an extensive grid search for the learning rate among {0.001, 0.005, 0.008, 0.01, 0.1}, the dropout rate among {0.1, 0.2, ..., 0.9}, and the number of hidden units among {8, 16, 32, 64, 128}, and the model is trained for 5,000 epochs with early stopping applied when the metric (i.e., validation loss) starts to drop. For KG completion, we set the batch size to 512, the model is trained for 500 epochs, and we perform an extensive grid search for the learning rate among {0.00001, 0.001, 0.01, 0.1}.
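The link-prediction grid search described above (5 learning rates × 9 dropout rates × 5 hidden sizes = 225 configurations) can be sketched generically. The `evaluate` function here is a hypothetical placeholder standing in for training and validating SBM-TNN; only the grid values themselves come from the paper.

```python
from itertools import product

# Hyperparameter grids quoted from the paper's link-prediction setup.
grid = {
    "lr": [0.001, 0.005, 0.008, 0.01, 0.1],
    "dropout": [round(0.1 * k, 1) for k in range(1, 10)],
    "hidden_units": [8, 16, 32, 64, 128],
}

def grid_search(evaluate, grid):
    """Exhaustive grid search returning the best-scoring configuration.

    `evaluate` maps a config dict to a validation score (higher is
    better); in practice it would train the model and report ROC-AUC.
    """
    best_score, best_cfg = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score

# Toy scoring function: prefers lr near 0.01 and larger hidden layers.
best_cfg, _ = grid_search(
    lambda c: -abs(c["lr"] - 0.01) + c["hidden_units"] / 1000, grid
)
print(best_cfg["lr"], best_cfg["hidden_units"])  # → 0.01 128
```

An exhaustive sweep like this is tractable at 225 configurations; for larger grids one would typically switch to random or Bayesian search.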