Dissecting the Diffusion Process in Linear Graph Convolutional Networks

Authors: Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments demonstrate that our proposed DGC improves linear GCNs by a large margin and makes them competitive with many modern variants of non-linear GCNs." (from Abstract) and "In this section, we conduct a comprehensive analysis on DGC and compare it against both linear and non-linear GCN variants on a collection of benchmark datasets."
Researcher Affiliation | Academia | 1 School of Mathematical Sciences, Peking University, Beijing, China; 2 Key Lab. of Machine Perception, School of Artificial Intelligence, Peking University, Beijing, China; 3 Institute for Artificial Intelligence, Peking University, Beijing, China; 4 Pazhou Lab, Guangzhou, China
Pseudocode | No | The paper presents equations for its proposed methods (DGC-Euler, DGC-RK) and summarizes the propagation rules in Table 1, but does not include a structured pseudocode or algorithm block (an illustrative sketch follows the table).
Open Source Code | Yes | "Code is available at https://github.com/yifeiwang77/DGC."
Open Datasets | Yes | "For semi-supervised node classification, we use three standard citation networks, Cora, Citeseer, and Pubmed [18] and Reddit networks [5]."
Dataset Splits | Yes | "For fully-supervised node classification, we randomly split the nodes into 60%, 20%, 20% for training, validation and testing. For semi-supervised node classification, we use the standard split, i.e., 20 labels per class for training, 500 labels for validation and 1000 labels for testing." (a split sketch follows the table)
Hardware Specification | Yes | "Table 5: Comparison of explicit computation time of different training stages on the Pubmed dataset with a single NVIDIA GeForce RTX 3090 GPU."
Software Dependencies | No | The paper mentions using the "Adam optimizer [9]" but does not provide specific software dependencies with version numbers for the overall experimental setup.
Experiment Setup | Yes | "For DGC, we use Adam optimizer [9] with learning rate 0.01, and the training epoch is 200 for semi-supervised tasks and 500 for fully-supervised tasks. The optimal T for DGC is chosen from {0.1, 0.2, ..., 10} on the validation set. And K is chosen from {2, 5, 10, 20, 50, 100}." (a configuration sketch follows the table)
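Since the paper states its propagation rules only as equations and in Table 1, the following minimal sketch illustrates what a DGC-Euler-style propagation could look like, assuming the Euler step X^{(k+1)} = ((1 - Δt) I + Δt S) X^{(k)} with Δt = T/K and S the symmetrically normalized adjacency with self-loops; the function name and signature are illustrative and not taken from the released code.

```python
import numpy as np
import scipy.sparse as sp

def dgc_euler_propagate(adj, features, T=5.0, K=50):
    """Illustrative Euler-style diffusion of node features (not the authors' code).

    Runs K steps of size dt = T / K with the symmetrically normalized
    adjacency S (self-loops added):  X <- (1 - dt) * X + dt * S @ X.
    """
    n = adj.shape[0]
    adj_sl = adj + sp.eye(n)                              # add self-loops
    deg = np.asarray(adj_sl.sum(axis=1)).ravel()          # degrees >= 1
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    S = d_inv_sqrt @ adj_sl @ d_inv_sqrt                  # D^-1/2 (A + I) D^-1/2

    dt = T / K
    X = features
    for _ in range(K):
        X = (1.0 - dt) * X + dt * (S @ X)
    return X
```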
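For the fully-supervised 60%/20%/20% random split quoted above, a minimal sketch could look like the following; the helper name, seed, and use of a plain random permutation are assumptions, not details from the paper.

```python
import numpy as np

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly split node indices into 60% train / 20% val / 20% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```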
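The experiment setup fixes the optimizer (Adam, learning rate 0.01), the epoch budget (200 semi-supervised, 500 fully-supervised), and the grids for T and K selected on the validation set. Below is a hedged sketch of that selection loop, assuming features are pre-propagated and a plain linear classifier is trained on top; the PyTorch-based helpers and their names are assumptions, not the authors' implementation.

```python
import itertools
import torch
import torch.nn.functional as F

# Hyperparameter grids quoted in the paper's experiment setup.
T_GRID = [round(0.1 * i, 1) for i in range(1, 101)]    # {0.1, 0.2, ..., 10}
K_GRID = [2, 5, 10, 20, 50, 100]

def train_and_validate(X, y, idx_train, idx_val, num_classes,
                       epochs=200, lr=0.01):
    """Train a linear classifier on pre-propagated features with Adam
    (lr 0.01; 200 epochs for the semi-supervised setting, 500 for the
    fully-supervised one) and return validation accuracy."""
    clf = torch.nn.Linear(X.shape[1], num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(clf(X[idx_train]), y[idx_train]).backward()
        opt.step()
    with torch.no_grad():
        pred = clf(X[idx_val]).argmax(dim=1)
    return (pred == y[idx_val]).float().mean().item()

def select_T_K(propagate, adj, features, y, idx_train, idx_val, num_classes):
    """Pick (T, K) by validation accuracy; `propagate` can be any
    propagation routine, e.g. the Euler-style sketch above."""
    best, best_acc = None, -1.0
    for T, K in itertools.product(T_GRID, K_GRID):
        X = torch.as_tensor(propagate(adj, features, T=T, K=K),
                            dtype=torch.float32)
        acc = train_and_validate(X, y, idx_train, idx_val, num_classes)
        if acc > best_acc:
            best, best_acc = (T, K), acc
    return best, best_acc
```

This sketch covers only the quoted hyperparameters; other details of the paper's setup (e.g., the DGC-RK variant or any regularization) are not reproduced here.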