On Self-Distilling Graph Neural Network
Authors: Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments |
| Researcher Affiliation | Collaboration | Yuzhao Chen (1,2), Yatao Bian (2), Xi Xiao (1,3), Yu Rong (2), Tingyang Xu (2), Junzhou Huang (2,4); 1 Tsinghua University, 2 Tencent AI Lab, 3 Peng Cheng Laboratory, 4 University of Texas at Arlington |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the release of source code. |
| Open Datasets | Yes | Node Classification: Table 2 summarizes the results of GNNs with various depths on Cora, Citeseer and PubMed [2008]. Graph Classification: Table 3 summarizes the results of various popular GNNs on graph kernel classification datasets, including ENZYMES, DD, and PROTEINS from the TU datasets [2016]. |
| Dataset Splits | Yes | We follow the setting of semi-supervised learning, using the fixed 20 nodes per class for training. [...] each experiment is conducted by 10-fold cross-validation with a split ratio of 8:1:1 for training, validation and testing. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | For our method, the hyper-parameters α and β are both set to 0.01 and γ is 0. [...] The hyper-parameter γ is fixed to 0 for node classification, and we determine α and β via a simple grid search. Details are provided in the Appendix. [...] Hyper-parameter settings are deferred to the Appendix. |
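The 10-fold cross-validation protocol quoted above (8:1:1 train/validation/test per fold, used for the graph-classification experiments) could be sketched as follows. The fold-assignment and rotation scheme here are assumptions for illustration; the paper does not specify how folds are formed or rotated.

```python
import numpy as np

def ten_fold_splits(num_items: int, seed: int = 0):
    """Yield (train, val, test) index arrays for 10-fold cross-validation
    with an 8:1:1 ratio, matching the split described in the paper.

    NOTE: shuffling with a fixed seed and rotating the validation fold
    one step ahead of the test fold are illustrative choices, not taken
    from the paper.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_items)
    folds = np.array_split(perm, 10)  # 10 roughly equal folds
    for k in range(10):
        test = folds[k]
        val = folds[(k + 1) % 10]
        # remaining 8 folds form the training set
        train = np.concatenate(
            [folds[i] for i in range(10) if i != k and i != (k + 1) % 10]
        )
        yield train, val, test

# Example: 10 folds over a 100-graph dataset -> 80/10/10 per fold.
splits = list(ten_fold_splits(100))
```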