LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation
Authors: Rui Xue, Haoyu Han, Mohamadali Torkamani, Jian Pei, Xiaorui Liu
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks. The implementation of LazyGNN is available at https://github.com/RXPHD/Lazy_GNN. |
| Researcher Affiliation | Collaboration | North Carolina State University, Raleigh, US; Michigan State University, East Lansing, US; Amazon, US (this work does not relate to the author's position at Amazon); Duke University, Durham, US. |
| Pseudocode | No | The paper includes conceptual diagrams (Figure 3 and Figure 4) but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The implementation of LazyGNN is available at https://github.com/RXPHD/Lazy_GNN. |
| Open Datasets | Yes | We conduct experiments on multiple large-scale graph datasets including REDDIT, YELP, FLICKR, ogbn-arxiv, and ogbn-products (Hu et al., 2020). |
| Dataset Splits | Yes | We conduct experiments on multiple large-scale graph datasets including REDDIT, YELP, FLICKR, ogbn-arxiv, and ogbn-products (Hu et al., 2020). The hyperparameter tuning of baselines closely follows the setting in GNNAutoScale (Fey et al., 2021). The convergence of validation accuracy in Figure 5 demonstrates that LazyGNN has a comparable convergence speed with GCN (GAS) and GCNII (GAS), and is slightly faster than APPNP (GAS) in terms of the number of training epochs. |
| Hardware Specification | No | The paper mentions running experiments on CPU and GPU memory but does not specify particular models, types, or configurations of the hardware used. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | For LazyGNN, hyperparameters are tuned from the following search space: (1) learning rate: {0.01, 0.001, 0.0001}; (2) weight decay: {0, 5e-4, 5e-5}; (3) dropout: {0.1, 0.3, 0.5, 0.7}; (4) propagation layers: L ∈ {1, 2}; (5) MLP layers: {3, 4}; (6) MLP hidden units: {256, 512}; (7) α ∈ {0.01, 0.1, 0.2, 0.5, 0.8}; (8) β and γ are simply set to 0.5 in most cases, but further tuning can improve performance. A hypothetical sketch of this search space appears below the table. |
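
The reported search space can be encoded as a simple grid. Below is a minimal sketch assuming a plain Python grid/random search; the parameter names and helper functions are illustrative assumptions and are not taken from the official LazyGNN repository.

```python
import itertools
import random

# Hypothetical encoding of the search space reported in the paper's
# experiment setup; names are illustrative, not from the LazyGNN repo.
SEARCH_SPACE = {
    "lr": [0.01, 0.001, 0.0001],
    "weight_decay": [0, 5e-4, 5e-5],
    "dropout": [0.1, 0.3, 0.5, 0.7],
    "propagation_layers": [1, 2],   # L
    "mlp_layers": [3, 4],
    "mlp_hidden": [256, 512],
    "alpha": [0.01, 0.1, 0.2, 0.5, 0.8],
    "beta": [0.5],                  # fixed at 0.5 in most cases per the paper
    "gamma": [0.5],                 # fixed at 0.5 in most cases per the paper
}

def iter_configs(space):
    """Yield every hyperparameter combination in the grid."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def sample_configs(space, n, seed=0):
    """Randomly sample n configurations when the full grid is too large."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in space.items()} for _ in range(n)]

if __name__ == "__main__":
    print(next(iter_configs(SEARCH_SPACE)))   # first grid point
    print(sample_configs(SEARCH_SPACE, n=3))  # three random configurations
```

Exhaustive enumeration of this grid yields 3 x 3 x 4 x 2 x 2 x 2 x 5 = 1440 configurations, so random sampling is often the more practical option for reproduction runs.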