Graph-Free Knowledge Distillation for Graph Neural Networks
Authors: Xiang Deng, Zhongfei Zhang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that GFKD achieves the state-of-the-art performance for distilling knowledge from GNNs without training data. In this section, we report extensive experiments for evaluating GFKD. |
| Researcher Affiliation | Academia | Xiang Deng, Zhongfei Zhang, State University of New York at Binghamton; xdeng7@binghamton.edu, zhongfei@cs.binghamton.edu |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code: https://github.com/Xiang-Deng-DL/GFKD |
| Open Datasets | Yes | We adopt six graph classification benchmark datasets [Xu et al., 2019] including three bioinformatics graph datasets, i.e., MUTAG, PTC, and PROTEINS, and three social network graph datasets, i.e., IMDB-B, COLLAB, and REDDIT-B. |
| Dataset Splits | No | The paper states 'On each dataset, 70% data are used for pretraining the teachers and the remaining 30% are used as the test data.' but does not specify a separate validation split or how it was handled. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU specifications). |
| Software Dependencies | No | The paper mentions 'Adam' as an optimizer but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages. |
| Experiment Setup | Yes | For generating fake graphs, we do 2500 iterations. The learning rates for structure and feature parameters are set to 1.0 (5.0 on PROTEINS, COLLAB, and REDDIT-B) and 0.01, respectively, and are divided by 10 every 1000 iterations. For KD, all the GNNs are trained for 400 epochs with Adam and the learning rate is linearly decreased from 1.0 to 0. τ is set to 2. More details are given in the Appendix. |
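The KD setup quoted above (temperature τ = 2, learning rate linearly decreased from 1.0 to 0 over 400 epochs) can be sketched in plain Python. This is an illustrative sketch of the standard temperature-scaled distillation loss and linear schedule, not the authors' released implementation; all function names are hypothetical.

```python
import math

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / tau) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, tau=2.0):
    """Distillation loss: KL(teacher || student) between
    temperature-softened distributions, scaled by tau**2 as in
    Hinton-style knowledge distillation (assumed formulation)."""
    p = softmax(teacher_logits, tau)   # soft teacher targets
    q = softmax(student_logits, tau)   # soft student predictions
    return tau * tau * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def linear_lr(epoch, total_epochs=400, lr0=1.0):
    """Learning rate decreased linearly from lr0 to 0, as reported."""
    return lr0 * (1.0 - epoch / total_epochs)
```

For example, `linear_lr(200)` gives the mid-training rate 0.5, and `kd_loss` is zero when student and teacher logits match exactly.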