Universal Graph Convolutional Networks
Authors: Di Jin, Zhizhi Yu, Cuiying Huo, Rui Wang, Xiao Wang, Dongxiao He, Jiawei Han
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a series of benchmark datasets demonstrate the superiority of U-GCN over some state-of-the-arts. |
| Researcher Affiliation | Academia | (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, China; (3) Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA |
| Pseudocode | No | The paper describes the proposed methods using mathematical formulations and textual descriptions, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code and data are available at https://github.com/jindi-tju. |
| Open Datasets | Yes | We adopt eight public network datasets with edge homophily ratio α ranging from strong homophily to strong heterophily, as shown in Table 1, to evaluate the performance of different methods. We use three citation networks Cora, CiteSeer and PubMed [19, 25], two Wikipedia networks Chameleon and Squirrel [24], and three webpage networks Cornell, Wisconsin and Texas. |
| Dataset Splits | Yes | For all methods, we set the dropout rate to 0.6 and use the same splits for training, validation and testing sets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware specifications (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers (Adam) and activation functions (ReLU, Sigmoid, Leaky ReLU), but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or library versions). |
| Experiment Setup | Yes | For all methods, we set the dropout rate to 0.6 and use the same splits for training, validation and testing sets. We run 5 times with the same partition and report the average results. We employ the Adam optimizer with the learning rate setting to 0.005 and apply early stopping with a patience of 20. In addition, we set the number of attention heads to 8, weight decay ∈ {5e-3, 5e-4}, and k ∈ {3, ..., 7} for the k-nearest neighbor network. |
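The reported setup (Adam at lr 0.005, dropout 0.6, early stopping with patience 20, 8 attention heads, weight decay in {5e-3, 5e-4}, k ∈ {3, ..., 7}, averaged over 5 runs) can be sketched as a configuration plus an early-stopping helper. This is a generic illustration, not the authors' released code; the `EarlyStopping` class and `HYPERPARAMS` names are hypothetical.

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # consecutive epochs without improvement

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Hyperparameters as reported in the paper's experiment setup.
HYPERPARAMS = {
    "dropout": 0.6,
    "lr": 0.005,                   # Adam optimizer learning rate
    "patience": 20,                # early-stopping patience
    "attention_heads": 8,
    "weight_decay": [5e-3, 5e-4],  # values searched over
    "knn_k": [3, 4, 5, 6, 7],      # k for the k-nearest-neighbor network
    "runs": 5,                     # repetitions per split; results averaged
}
```

In a training loop, one would call `stopper.step(val_loss)` once per epoch and break when it returns True.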