Knowledge Distillation Improves Graph Structure Augmentation for Graph Neural Networks

Authors: Lirong Wu, Haitao Lin, Yufei Huang, Stan Z. Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For three popular graph augmentation methods, namely GAUG, MH-Aug, and GraphAug, the experimental results show that the learned student models outperform their vanilla implementations by an average accuracy of 4.6% (GAUG), 4.2% (MH-Aug), and 4.6% (GraphAug) on eight graph datasets.
Researcher Affiliation | Academia | Lirong Wu (1,2), Haitao Lin (1,2), Yufei Huang (1,2), and Stan Z. Li (1). (1) AI Lab, School of Engineering, Westlake University; (2) College of Computer Science and Technology, Zhejiang University. Contact: {wulirong, linhaitao, huangyufei, stan.zq.li}@westlake.edu.cn
Pseudocode | No | The paper describes the mathematical formulations and steps for its methods but does not include any clearly labeled pseudocode or algorithm blocks; a hedged illustrative sketch of the distillation objective appears below this table.
Open Source Code | Yes | Code is available at https://github.com/LirongWu/KDGA.
Open Datasets | Yes | The proposed KDGA framework is evaluated on eight datasets: two commonly used homophily graph datasets, Cora [38] and Citeseer [12], and six heterophily graph datasets: Cornell, Texas, Wisconsin, Actor [42], Chameleon, and Squirrel [37].
Dataset Splits | No | The paper states that it addresses a semi-supervised node classification task in which only a subset of nodes V_L with corresponding labels Y_L is known, but it does not provide explicit training, validation, or test split percentages or sample counts in the main text; implementation details and hyperparameter settings are deferred to Appendix B and the supplementary material.
Hardware Specification | No | No specific hardware details, such as GPU models, CPU types, or memory amounts used for the experiments, are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., "PyTorch 1.9", "Python 3.8") are provided in the paper.
Experiment Setup | No | The paper defers "implementation details and the best hyperparameter settings for each dataset" to Appendix B and the supplementary material, and places the parameter sensitivity analysis w.r.t. two key hyperparameters, the fusion factor α and the loss weight κ, in Appendix C. No specific hyperparameter values or training configurations are detailed in the main text.
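
Since the paper itself provides no pseudocode and defers hyperparameter values to its appendices, the following is a minimal, hedged sketch of what a distillation objective consistent with the title and results above could look like: a teacher GNN trained with access to the augmented graph structure, and a student GNN that sees only the original graph. All names here (SimpleGNN, kdga_loss, adj_aug) and the exact roles of the fusion factor α and loss weight κ are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


class SimpleGNN(torch.nn.Module):
    """Minimal dense GCN-style network; an illustrative stand-in for any GNN backbone."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized propagation: D^{-1/2} (A + I) D^{-1/2} X W
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1.0).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)


def kdga_loss(student, teacher, x, adj, adj_aug, y, train_mask,
              alpha=0.5, kappa=1.0, tau=1.0):
    """Assumed objective: CE on labeled nodes + kappa * KL(teacher || student).

    alpha (fusion factor, assumed role): interpolates augmented and original structure.
    kappa (loss weight): balances supervision against distillation.
    """
    adj_fused = alpha * adj_aug + (1.0 - alpha) * adj
    with torch.no_grad():
        t_logits = teacher(x, adj_fused)   # teacher sees the (fused) augmented graph
    s_logits = student(x, adj)             # student sees only the original graph
    ce = F.cross_entropy(s_logits[train_mask], y[train_mask])
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                  F.softmax(t_logits / tau, dim=-1),
                  reduction="batchmean") * tau * tau
    return ce + kappa * kd
```

In this reading, κ trades off the supervised cross-entropy against the KL distillation term, and α interpolates between the augmented and original adjacency seen by the teacher; the paper's Appendix B and C would be the authoritative source for both.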