Node-wise Localization of Graph Neural Networks
Authors: Zemin Liu, Yuan Fang, Chenghao Liu, Steven C.H. Hoi
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct extensive experiments on four benchmark graphs, and consistently obtain promising performance surpassing the state-of-the-art GNNs. |
| Researcher Affiliation | Collaboration | Zemin Liu¹, Yuan Fang¹, Chenghao Liu² and Steven C.H. Hoi¹,² (¹Singapore Management University, Singapore; ²Salesforce Research Asia, Singapore) |
| Pseudocode | No | The paper describes the proposed approach using mathematical formulations and descriptive text (e.g., equations 1-11) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions 'Additional implementation details and experimental settings are included in Sections A and B of the supplemental material' and 'Details of the calculation are included in Section C of the supplemental material' but does not explicitly state that source code for the methodology is released or provide a link. |
| Open Datasets | Yes | Datasets. We utilize four benchmark datasets. They include two academic citation networks, namely Cora and Citeseer [Yang et al., 2016]... A similar citation network for Wikipedia articles called Chameleon [Pei et al., 2020] is also used. Finally, we use an e-commerce co-purchasing network called Amazon [Hou et al., 2020]. |
| Dataset Splits | Yes | For all datasets, we follow the standard split in the literature [Yang et al., 2016; Kipf and Welling, 2017; Veličković et al., 2018], which uses 20 labeled nodes per class for training, 500 nodes for validation and 1000 nodes for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud instance specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming language versions, library versions (e.g., PyTorch, TensorFlow), or solver versions. |
| Experiment Setup | Yes | The dimension of the hidden layer defaults to 8, while we also present results using larger dimensions. The regularization of GNN parameters is set to λG = 0.0005. These settings are chosen via empirical validation, and are largely consistent with the literature [Perozzi et al., 2014; Kipf and Welling, 2017; Veličković et al., 2018]. For our models, the additional regularizations are set to λL = λ = 1 (except for λ = 0.1 in LGAT). |
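The standard semi-supervised split quoted in the Dataset Splits row (20 labeled nodes per class for training, 500 nodes for validation, 1000 for testing) can be sketched as a small helper. The per-dataset class counts below are commonly reported figures for these benchmarks, not values stated in the table, so treat them as assumptions.

```python
def split_sizes(num_classes, per_class=20, num_val=500, num_test=1000):
    """Node counts for the standard split of Yang et al. (2016):
    20 labeled nodes per class for training, fixed validation/test sets."""
    return {"train": per_class * num_classes, "val": num_val, "test": num_test}

# Class counts are standard figures for these benchmarks (assumption,
# not taken from the table above): Cora has 7 classes, Citeseer has 6.
for name, num_classes in [("Cora", 7), ("Citeseer", 6)]:
    print(name, split_sizes(num_classes))
```

Under these assumptions, Cora would use 140 training nodes and Citeseer 120, matching the commonly cited sizes of this split.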