DRGCN: Dynamic Evolving Initial Residual for Deep Graph Convolutional Networks
Authors: Lei Zhang, Xiaodong Yan, Jianshan He, Ruopeng Li, Wei Chu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that our model effectively relieves the problem of over-smoothing in deep GCNs and outperforms the state-of-the-art (SOTA) methods on various benchmark datasets. |
| Researcher Affiliation | Industry | Ant Group, Beijing, China |
| Pseudocode | Yes | In addition, the complete Full-Batch and Mini-Batch pseudocodes are provided in the Appendix. |
| Open Source Code | Yes | Our reproducible code is available on GitHub. |
| Open Datasets | Yes | First, we use three standard citation network datasets Cora, Citeseer, and Pubmed (Sen et al. 2008) for semi-supervised node classification. Then, we conduct the experiments on the Node Property Prediction of Open Graph Benchmark (Hu et al. 2020)... https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv |
| Dataset Splits | Yes | In the experiments, we apply the standard fixed validation and testing split (Kipf and Welling 2017) on three citation datasets with 500 nodes for validation and 1,000 nodes for testing. In addition, in training set sizes experiments, we conduct experiments with different training set sizes {140,500,750,1000,1250} on the Cora dataset for several representative baselines. Then, according to the performance of the training set sizes experiments, we adopted fixed training set sizes for model depths experiments, which are 1000 for Cora, 1600 for Citeseer, and 10000 for Pubmed. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used to run its experiments, beyond general mentions of 'GPU memory'. |
| Software Dependencies | No | The paper mentions the Adam SGD optimizer but does not list specific software dependencies with version numbers for libraries, frameworks, or languages used in the implementation or experimentation. |
| Experiment Setup | Yes | We use the Adam SGD optimizer (Kingma and Ba 2014) with a learning rate of 0.001 and early stopping with the patience of 500 epochs to train DRGCN and DRGCN*. We set L2 regularization of the convolutional layer, fully connected layer, and evolving layer to 0.01, 0.0005, and 0.005 respectively. |
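
The experiment-setup row above reports Adam with a learning rate of 0.001, early stopping with a patience of 500 epochs, and per-layer L2 regularization of 0.01 / 0.0005 / 0.005 for the convolutional, fully connected, and evolving layers. The following is a minimal sketch (not the authors' released code) of how those settings could be wired up in PyTorch; the module names `conv`, `fc`, and `evolve` and the Cora-like dimensions are hypothetical placeholders, and Adam's `weight_decay` option is used here as a stand-in for the L2 penalty described in the paper.

```python
import torch
import torch.nn as nn


class DRGCNSketch(nn.Module):
    """Placeholder model; real DRGCN layers are replaced by simple linear maps."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv = nn.Linear(in_dim, hidden_dim)        # stand-in for the graph convolutional layer
        self.fc = nn.Linear(hidden_dim, num_classes)     # fully connected output layer
        self.evolve = nn.Linear(hidden_dim, hidden_dim)  # stand-in for the dynamic evolving layer

    def forward(self, x):
        return self.fc(self.evolve(torch.relu(self.conv(x))))


model = DRGCNSketch(in_dim=1433, hidden_dim=64, num_classes=7)  # Cora-like dimensions (hypothetical)

# Adam with lr 0.001 and per-group weight decay matching the quoted L2 values:
# 0.01 (convolutional), 0.0005 (fully connected), 0.005 (evolving).
optimizer = torch.optim.Adam(
    [
        {"params": model.conv.parameters(), "weight_decay": 0.01},
        {"params": model.fc.parameters(), "weight_decay": 0.0005},
        {"params": model.evolve.parameters(), "weight_decay": 0.005},
    ],
    lr=0.001,
)

# Early stopping would track validation performance with a patience of 500 epochs.
patience = 500
```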