Rethinking Graph Regularization for Graph Neural Networks

Authors: Han Yang, Kaili Ma, James Cheng

AAAI 2021, pp. 4573-4581 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated P-reg on node classification, graph classification and graph regression tasks. We report the results of P-reg here using Cross Entropy as φ if not specified, and the results of different choices of the φ function are reported in Appendix F due to the limited space. In addition to the node classification task for both random splits of 7 graph datasets (Yang, Cohen, and Salakhutdinov 2016; McAuley et al. 2015; Shchur et al. 2019) and the standard split of 3 graph datasets reported in Tables 1 and 2, we also evaluated P-reg on graph-level tasks on the OGB dataset (Hu et al. 2020) in Table 3. (A hedged sketch of the P-reg loss follows the table.)
Researcher Affiliation | Academia | Han Yang, Kaili Ma, James Cheng, The Chinese University of Hong Kong (hyang@cse.cuhk.edu.hk, klma@cse.cuhk.edu.hk, jcheng@cse.cuhk.edu.hk)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to its own source code for the described methodology (e.g., a specific repository link or an explicit statement of code release).
Open Datasets | Yes | We evaluated P-reg on node classification, graph classification and graph regression tasks. ... In addition to the node classification task for both random splits of 7 graph datasets (Yang, Cohen, and Salakhutdinov 2016; McAuley et al. 2015; Shchur et al. 2019) and the standard split of 3 graph datasets reported in Tables 1 and 2, we also evaluated P-reg on graph-level tasks on the OGB dataset (Hu et al. 2020) in Table 3.
Dataset Splits | Yes | The train/validation/test splits of all 7 datasets are 20 nodes/30 nodes/all the remaining nodes per class, as recommended by Shchur et al. (2019). (A sketch of this split protocol follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | Our implementation is based on PyTorch (Paszke et al. 2019) and we used the Adam (Kingma and Ba 2014) optimizer with learning rate equal to 0.01 to train all the models. No specific PyTorch version number is given.
Experiment Setup | Yes | Our implementation is based on PyTorch (Paszke et al. 2019) and we used the Adam (Kingma and Ba 2014) optimizer with learning rate equal to 0.01 to train all the models. Additional details about the experiment setup are given in Appendix E. (A toy training-loop sketch follows the table.)
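
To make the quoted setup concrete, here is a minimal PyTorch sketch of a P-reg-style penalty with cross entropy as φ, following the paper's general form L = L_cls + μ · (1/n) Σ_i φ(Z_i, (ÂZ)_i), where Z is the GNN output and Â a normalized adjacency matrix. The function name, the dense-adjacency representation, the row normalization D⁻¹A, and the direction of the cross entropy (propagated predictions as the target) are illustrative assumptions, not details confirmed by the report above.

```python
import torch
import torch.nn.functional as F

def p_reg_cross_entropy(logits, adj, mu=0.5):
    """P-reg-style penalty: push each node's prediction towards the
    average prediction of its neighbours.

    logits -- (n, c) raw GNN outputs Z
    adj    -- (n, n) dense float adjacency matrix A
    mu     -- regularization weight (placeholder value; the paper tunes it)
    """
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    a_hat = adj / deg                       # row-normalized adjacency, D^-1 A
    propagated = a_hat @ logits             # Z' = A_hat Z
    # phi = cross entropy between softmax(Z) and softmax(Z'); treating the
    # propagated distribution as the target is an assumption of this sketch.
    log_p = F.log_softmax(logits, dim=1)
    q = F.softmax(propagated, dim=1)
    phi = -(q * log_p).sum(dim=1).mean()    # (1/n) sum_i phi(Z_i, Z'_i)
    return mu * phi
```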
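
The Dataset Splits row quotes a 20 train / 30 validation nodes per class protocol, with all remaining nodes used for testing (Shchur et al. 2019). The helper below builds such masks; the function name and seed handling are illustrative, not taken from the paper.

```python
import torch

def per_class_split(labels, num_train=20, num_val=30, seed=0):
    """Random split with 20 train / 30 validation nodes per class and all
    remaining nodes as test, per the protocol quoted in the table."""
    g = torch.Generator().manual_seed(seed)
    n = labels.numel()
    train_mask = torch.zeros(n, dtype=torch.bool)
    val_mask = torch.zeros(n, dtype=torch.bool)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        perm = idx[torch.randperm(idx.numel(), generator=g)]
        train_mask[perm[:num_train]] = True
        val_mask[perm[num_train:num_train + num_val]] = True
    test_mask = ~(train_mask | val_mask)
    return train_mask, val_mask, test_mask
```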
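
Finally, the Experiment Setup row pins down only Adam with learning rate 0.01. The sketch below wires the two helpers above into a toy training loop; the synthetic data, the one-layer linear stand-in for a GNN, the epoch count, and the μ = 0.5 weight are all assumptions, since the report notes that further details are deferred to Appendix E of the paper.

```python
import torch
import torch.nn.functional as F

# Synthetic stand-ins; sizes are arbitrary. A real run would use the quoted
# benchmark graphs and an actual GNN, neither of which is specified here.
n, d, c = 1000, 16, 7
features = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
labels = torch.randint(0, c, (n,))
train_mask, val_mask, test_mask = per_class_split(labels)  # sketch above

# One propagation step followed by a linear layer as a toy "GNN".
lin = torch.nn.Linear(d, c)
a_hat = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)

optimizer = torch.optim.Adam(lin.parameters(), lr=0.01)  # lr = 0.01 as quoted
for epoch in range(200):                                 # epoch count: assumption
    optimizer.zero_grad()
    logits = lin(a_hat @ features)
    loss = F.cross_entropy(logits[train_mask], labels[train_mask])
    loss = loss + p_reg_cross_entropy(logits, adj, mu=0.5)  # mu: assumption
    loss.backward()
    optimizer.step()
```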