Rethinking and Extending the Probabilistic Inference Capacity of GNNs

Authors: Tuo Xu, Lei Zou

ICLR 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Experiments on both synthetic and real-world datasets demonstrate the effectiveness of our approaches.
Researcher Affiliation | Academia | Tuo Xu, Wangxuan Institute of Computer Technology, Peking University, Beijing, China (doujzc@stu.pku.edu.cn); Lei Zou, Wangxuan Institute of Computer Technology, Peking University, Beijing, China (zoulei@pku.edu.cn)
Pseudocode | Yes | Algorithm 1: The 1-WL test (color refinement); Algorithm 2: The k-WL tests; Algorithm 3: The k-FWL tests; Algorithm 4: The Phantom Node Procedure. (A minimal 1-WL sketch appears after the table.)
Open Source Code | No | The paper does not contain an explicit statement or link providing access to the source code for the described methodology.
Open Datasets | Yes | For node classification tasks, the experiments follow the settings in Qu et al. (2022). For fairness we have rerun the results of GCN (Kipf & Welling, 2016b), SAGE (Hamilton et al., 2017) and GCNII (Chen et al., 2020a) on the datasets. We train the GNNs to output 121-dimensional vectors, with each dimension corresponding to one of the labels. For link prediction tasks, the experiment configurations follow the settings in Chamberlain et al. (2023). (A dataset-loading sketch appears after the table.)
  Table 4: Statistics of datasets.
  Dataset | Task | # Features | # Labels | # Nodes | # Edges
  PPI | NC | 50 | 121 | 56944 | 818716
  Cora | LP | 1433 | 1 | 2708 | 5278
  Citeseer | LP | 3703 | 1 | 3327 | 4676
  Pubmed | LP | 500 | 1 | 18717 | 44237
Dataset Splits | Yes | We randomly generate 70-10-20 percent train-val-test splits, which is the same as Chamberlain et al. (2023). (A split sketch appears after the table.)
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU model, CPU type, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, ReLU activation, GCN, SAGE, and GCNII but does not provide specific version numbers for these, nor for the main deep learning framework (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | We choose the Adam (Kingma & Ba, 2014) optimizer, with learning rate 5 × 10⁻³ and weight decay 0. (A GCN configuration sketch appears after the table.)
  GCN: We set the number of hidden neurons to 128 and the number of layers to 2. We use ReLU (Nair & Hinton, 2010) as the activation function. We set the dropout rate to 0.5 and apply layer normalization.
  SAGE: We set the number of hidden neurons to 1024 and the number of layers to 2. We use ReLU (Nair & Hinton, 2010) as the activation function. We set the dropout rate to 0.5 and apply layer normalization.
  GCNII: We set the number of hidden neurons to 1024 and the number of layers to 5. We use ReLU (Nair & Hinton, 2010) as the activation function. We set the dropout rate to 0.5 and do not apply layer normalization.
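
For reference on the Pseudocode row, the 1-WL test (color refinement) can be summarized by the following minimal Python sketch. The adjacency-list input format, the `one_wl` name, and the iteration cap are illustrative assumptions, not the paper's Algorithm 1 verbatim.

```python
from collections import Counter

def one_wl(adj, num_iters=10):
    """Minimal 1-WL color refinement sketch.

    adj: dict mapping each node to an iterable of its neighbors (assumed format).
    Returns a dict of stabilized integer node colors.
    """
    # Start with a uniform color for every node.
    colors = {v: 0 for v in adj}
    for _ in range(num_iters):
        # New signature = (own color, sorted multiset of neighbor colors).
        signatures = {
            v: (colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
            for v in adj
        }
        # Compress distinct signatures back into small integer color ids.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:  # refinement has stabilized
            break
        colors = new_colors
    return colors
```

Two graphs are distinguished by 1-WL if the multisets of final colors differ; the k-WL and k-FWL tests refine colors of k-tuples of nodes in the same spirit.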
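The datasets in Table 4 are standard benchmarks. Assuming PyTorch Geometric as the framework (the paper does not name one), they can be loaded roughly as follows; the root paths are placeholders.

```python
# Hedged sketch: loading the Table 4 benchmarks with PyTorch Geometric.
# The framework choice and root paths are assumptions; the paper names no library.
from torch_geometric.datasets import PPI, Planetoid

# PPI: multi-label node classification (50 features, 121 labels).
ppi_train = PPI(root="data/PPI", split="train")
ppi_val = PPI(root="data/PPI", split="val")
ppi_test = PPI(root="data/PPI", split="test")

# Cora / Citeseer / Pubmed: used here for link prediction.
cora = Planetoid(root="data/Planetoid", name="Cora")
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")
pubmed = Planetoid(root="data/Planetoid", name="PubMed")

print(ppi_train[0].num_node_features, cora[0].num_edges)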
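The 70-10-20 random edge split reported under Dataset Splits can be realized with PyTorch Geometric's RandomLinkSplit transform; this is one way to reproduce the stated protocol, not the authors' released code.

```python
# Hedged sketch: a 70-10-20 train/val/test edge split for link prediction.
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import RandomLinkSplit

data = Planetoid(root="data/Planetoid", name="Cora")[0]

splitter = RandomLinkSplit(
    num_val=0.10,   # 10% of edges held out for validation
    num_test=0.20,  # 20% of edges held out for testing (70% remain for training)
    is_undirected=True,
)
train_data, val_data, test_data = splitter(data)
print(train_data.edge_label_index.size(1), test_data.edge_label_index.size(1))
```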
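The GCN baseline in the Experiment Setup row corresponds roughly to the sketch below; the use of PyTorch Geometric's GCNConv and the exact placement of layer normalization are assumptions rather than the authors' implementation.

```python
# Hedged sketch of the GCN baseline described above: 128 hidden units, 2 layers,
# ReLU, dropout 0.5, layer normalization, Adam with lr = 5e-3 and weight decay 0.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNBaseline(torch.nn.Module):
    def __init__(self, in_dim, out_dim=121, hidden=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.norm = torch.nn.LayerNorm(hidden)        # placement is an assumption
        self.conv2 = GCNConv(hidden, out_dim)         # 121-dim output, one per PPI label

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = self.norm(x)
        x = F.relu(x)
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

model = GCNBaseline(in_dim=50)  # PPI has 50 input features
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3, weight_decay=0.0)
```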