Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Bayesian Neighborhood Adaptation for Graph Neural Networks

Authors: Paribesh Regmi, Rui Li, Kishan K C

TMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type Experimental Experiments on benchmark homophilic and heterophilic datasets show that the proposed method is compatible with state-of-the-art GNN variants, achieving competitive or superior performance on the node classification task, and providing well-calibrated predictions.
Researcher Affiliation Collaboration Paribesh Regmi (EMAIL), Golisano College of Computing and Information Science, Rochester Institute of Technology; Rui Li (EMAIL), Golisano College of Computing and Information Science, Rochester Institute of Technology; Kishan KC (EMAIL), Amazon.com, Inc.
Pseudocode Yes The algorithm of our proposed framework is in Algorithm 1.

Algorithm 1: Training of our proposed method
Input: Graph G, D, S, prior parameters α, β
Initialize: variational parameters {a_t, b_t}_{t=1}^T
1: Draw S samples of network structures {Z_s}_{s=1}^S from q(Z, ν)
2: for s = 1, ..., S do
3:     Compute the neighborhood scope l_ns from Z_s (see Section 3.3)
4:     Compute log p(D | Z_s, W, G) with l_ns GNN layers using Equation 5
5: end for
6: Compute the ELBO using Equation 8
7: Update {a_t, b_t}_{t=1}^{l_ns} and {W_t}_{t=1}^{l_ns} using backpropagation
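The sampling steps of Algorithm 1 can be sketched in plain Python. The Beta-parameterized layer indicators and the "deepest active layer" scope rule below are illustrative assumptions, not the paper's exact definitions (its Section 3.3 and Equations 5 and 8 define the scope and the likelihood/ELBO):

```python
import random

def sample_neighborhood_scopes(a, b, num_samples, seed=0):
    """Sketch of steps 1-3 of Algorithm 1: draw network-structure samples
    and derive a neighborhood scope (number of active GNN layers) from each.
    `a`, `b` are per-layer variational Beta parameters {a_t, b_t}."""
    rng = random.Random(seed)
    T = len(a)
    scopes = []
    for _ in range(num_samples):
        # Sample per-layer probabilities nu_t ~ Beta(a_t, b_t), then
        # binary layer indicators z_t ~ Bernoulli(nu_t).
        z = []
        for t in range(T):
            nu_t = rng.betavariate(a[t], b[t])
            z.append(1 if rng.random() < nu_t else 0)
        # Assumed scope rule: index of the deepest active layer
        # (the paper's actual rule is given in its Section 3.3).
        scope = max((t + 1 for t, z_t in enumerate(z) if z_t), default=0)
        scopes.append(scope)
    return scopes
```

In the full method, each sampled scope would set the number of GNN layers used to evaluate log p(D | Z_s, W, G) before averaging into the ELBO.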
Open Source Code No The text does not contain a clear, affirmative statement of code release or a direct link to a source code repository for the methodology described in this paper.
Open Datasets Yes We use the publicly available datasets for experimentation which includes the three homophilic citation graphs: Citeseer, Cora & Pubmed and four heterophilic graphs: Chameleon, Cornell, Texas, and Wisconsin. ... We evaluate the models on three large graphs: Flickr (Zeng et al., 2020), ogb-arxiv & ogb-proteins (Hu et al., 2020).
Dataset Splits Yes We used the standard fixed split for the homophilic graphs as introduced in (Yang et al., 2016). For heterophilic datasets, we adopt the 3:1:1 split for the train, validation and test sets respectively as in (Luan et al., 2022). ... For experimental evaluation, we randomly split the interactions into training (70%), validation (10%), and testing (20%) sets.
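The random splits quoted above (3:1:1 for heterophilic graphs, 70/10/20 for interactions) can be sketched as follows; the function name and seed are illustrative:

```python
import random

def split_indices(n, fractions=(0.6, 0.2, 0.2), seed=0):
    """Randomly split n indices into train/val/test by the given fractions.
    3:1:1 corresponds to (0.6, 0.2, 0.2); 70/10/20 to (0.7, 0.1, 0.2)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Note that the homophilic graphs instead use the fixed split of Yang et al. (2016), so no random splitting is involved there.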
Hardware Specification Yes The experiments are carried out on NVIDIA A100-PCIE-40GB and NVIDIA RTX A5000 GPUs.
Software Dependencies No The paper does not provide specific version numbers for key software components or libraries used in the implementation of the experiments.
Experiment Setup Yes The general setup for the experiments (unless mentioned otherwise), including the width of hidden layers (O), learning rate (lr), and activation function (act), is detailed in Table 7. ...

Table 7: General hyperparameter setup
O: 128
epochs: 500
patience: 100
lr: 1e-2
dropout: 0.5
act: ReLU
optimizer: Adam
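A minimal sketch of how Table 7's settings might be wired up; the dictionary keys and the early-stopping rule implied by `patience` are assumptions, since the paper lists only the values:

```python
# Hypothetical key names mirroring Table 7 (the paper gives only the values).
CONFIG = {
    "hidden_width": 128,   # O: width of hidden layers
    "epochs": 500,
    "patience": 100,       # assumed: epochs without validation improvement
    "lr": 1e-2,
    "dropout": 0.5,
    "activation": "ReLU",
    "optimizer": "Adam",
}

def should_stop(val_losses, patience):
    """Assumed early-stopping rule: stop once the best validation loss
    is at least `patience` epochs old."""
    if not val_losses:
        return False
    best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
    return len(val_losses) - 1 - best_epoch >= patience
```

With `patience=100` and `epochs=500`, training would halt early only if validation loss plateaus for 100 consecutive epochs.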