Elastic Graph Neural Networks
Authors: Xiaorui Liu, Wei Jin, Yao Ma, Yaxin Li, Hua Liu, Yiqi Wang, Ming Yan, Jiliang Tang
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on semi-supervised learning tasks demonstrate that the proposed Elastic GNNs obtain better adaptivity on benchmark datasets and are significantly robust to graph adversarial attacks. [...] In this section, we conduct experiments to validate the effectiveness of the proposed Elastic GNNs. We first introduce the experimental settings. Then we assess the performance of Elastic GNNs and investigate the benefits of introducing ℓ1-based graph smoothing into GNNs with semi-supervised learning tasks under normal and adversarial settings. In the ablation study, we validate the local adaptive smoothness, sparsity pattern, and convergence of EMP. |
| Researcher Affiliation | Academia | ¹Department of Computer Science and Engineering, Michigan State University, USA; ²School of Mathematics, Shandong University, China; ³Department of Computational Mathematics, Science and Engineering, Michigan State University, USA. |
| Pseudocode | Yes | Figure 1. Elastic Message Passing (EMP). F^0 = X_in and Z^0 = 0_{m×d}. (An illustrative code sketch of this scheme follows the table.) |
| Open Source Code | Yes | The implementation of Elastic GNNs is available at https://github.com/lxiaorui/ElasticGNN. |
| Open Datasets | Yes | We conduct experiments on 8 real-world datasets including three citation graphs, i.e., Cora, Citeseer, Pubmed (Sen et al., 2008), two co-authorship graphs, i.e., Coauthor CS and Coauthor Physics (Shchur et al., 2018), two co-purchase graphs, i.e., Amazon Computers and Amazon Photo (Shchur et al., 2018), and one blog graph, i.e., Polblogs (Adamic & Glance, 2005). [...] The data statistics for the benchmark datasets used in Section 4.2 are summarized in Table 5 in Appendix A. |
| Dataset Splits | Yes | The data statistics for the benchmark datasets used in Section 4.2 are summarized in Table 5 in Appendix A. The data statistics for the adversarially attacked graph used in Section 4.3 are summarized in Table 6. [...] We randomly split 10%/10%/80% of nodes for training, validation and test. (A sketch of this split procedure follows the table.) |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'DeepRobust (Li et al., 2020), a PyTorch library for adversarial attacks and defenses' but does not specify PyTorch's version or any other software dependencies with explicit version numbers. |
| Experiment Setup | Yes | For all models, we use 2 layer neural networks with 64 hidden units. [...] hyperparameters are tuned from the following search space: 1) learning rate: {0.05, 0.01, 0.005}; 2) weight decay: {5e-4, 5e-5, 5e-6}; 3) dropout rate: {0.5, 0.8}. For APPNP, the propagation step K is tuned from {5, 10} and the parameter α is tuned from {0, 0.1, 0.2, 0.3, 0.5, 0.8, 1.0}. For Elastic GNNs, the propagation step K is tuned from {5, 10} and parameters λ1 and λ2 are tuned from {0, 3, 6, 9}. As suggested by Theorem 1, we set γ = 1/(1+λ2) and β = 1/(2γ) in the proposed elastic message passing scheme. Adam optimizer (Kingma & Ba, 2014) is used in all experiments. (The quoted search grid is sketched in code after the table.) |
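The Elastic Message Passing scheme quoted in the Pseudocode row is a primal-dual iteration for ℓ1-regularized graph smoothing. The sketch below is illustrative rather than a reproduction of the paper's Figure 1: it assumes a dense normalized incidence matrix `Delta` whose product `Delta.T @ Delta` gives the graph Laplacian in the smooth part of the objective, reuses the γ and β settings quoted in the Experiment Setup row, and all function and variable names (`elastic_smoothing`, `X_in`, `F_half`) are invented for this sketch.

```python
# Illustrative primal-dual sketch of l1-regularized graph smoothing in the
# spirit of EMP. `Delta` is assumed to be an (m, n) normalized incidence
# matrix with Delta.T @ Delta equal to the graph Laplacian; this is a
# sketch, not the paper's exact update ordering.
import torch


def elastic_smoothing(X_in, Delta, lambda_1=3.0, lambda_2=3.0, K=10):
    """Smooth node features X_in (n, d) over the graph encoded by Delta (m, n)."""
    gamma = 1.0 / (1.0 + lambda_2)   # step size quoted in the setup row
    beta = 1.0 / (2.0 * gamma)       # dual step size quoted in the setup row

    F = X_in.clone()                                          # F^0 = X_in
    Z = torch.zeros(Delta.size(0), X_in.size(1),
                    device=X_in.device, dtype=X_in.dtype)     # Z^0 = 0_{m x d}
    L_tilde = Delta.t() @ Delta                               # graph Laplacian

    for _ in range(K):
        # gradient of the smooth part:
        # (1/2)||F - X_in||^2 + (lambda_2/2) tr(F^T L F)
        grad = (F - X_in) + lambda_2 * (L_tilde @ F)
        Y = F - gamma * grad                       # forward (gradient) step
        F_half = Y - gamma * (Delta.t() @ Z)       # tentative primal update
        # dual ascent on the edge differences, then projection onto the
        # l_inf ball of radius lambda_1 (the prox of the l1 conjugate)
        Z = (Z + beta * (Delta @ F_half)).clamp(-lambda_1, lambda_1)
        F = Y - gamma * (Delta.t() @ Z)            # primal step with new dual
    return F
```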
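For the Open Datasets and Dataset Splits rows, the following is a minimal sketch of loading one citation graph and drawing the quoted 10%/10%/80% random node split. The PyTorch Geometric `Planetoid` loader is an assumption here; the paper does not prescribe a particular data-loading library.

```python
# Illustrative sketch: load one benchmark graph and draw a random
# 10%/10%/80% train/validation/test split over the nodes, as quoted above.
import torch
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root='data/Planetoid', name='Cora')  # also 'Citeseer', 'Pubmed'
data = dataset[0]

perm = torch.randperm(data.num_nodes)
n_train = int(0.1 * data.num_nodes)
n_val = int(0.1 * data.num_nodes)

train_idx = perm[:n_train]                 # 10% of nodes for training
val_idx = perm[n_train:n_train + n_val]    # 10% for validation
test_idx = perm[n_train + n_val:]          # remaining ~80% for test
```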
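The Experiment Setup row can be encoded as a simple search grid. The sketch below only enumerates the quoted grid for Elastic GNN and derives γ = 1/(1+λ2) and β = 1/(2γ); model construction and the training loop are omitted, and the variable names are illustrative.

```python
# Sketch of the quoted hyperparameter grid for Elastic GNN; gamma and beta
# are derived from lambda_2 as stated in the setup row.
import itertools

search_space = {
    "lr": [0.05, 0.01, 0.005],
    "weight_decay": [5e-4, 5e-5, 5e-6],
    "dropout": [0.5, 0.8],
    "K": [5, 10],            # propagation steps
    "lambda_1": [0, 3, 6, 9],
    "lambda_2": [0, 3, 6, 9],
}

for lr, wd, dropout, K, lam1, lam2 in itertools.product(*search_space.values()):
    gamma = 1.0 / (1.0 + lam2)   # gamma = 1 / (1 + lambda_2)
    beta = 1.0 / (2.0 * gamma)   # beta = 1 / (2 * gamma)
    # Build a 2-layer, 64-hidden-unit model with these settings and train it
    # with torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd).
```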