Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity

Authors: Mucong Ding, Tahseen Rabbani, Bang An, Evan Wang, Furong Huang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on large-graph benchmarks demonstrate the scalability and competitive performance of our Sketch-GNNs versus their full-size GNN counterparts."
Researcher Affiliation | Academia | "Department of Computer Science, University of Maryland {mcding, trabbani, bangan, furongh}@cs.umd.edu"
Pseudocode | Yes | "We generalize Sketch-GNN to more GNN models in Appendix D, and the pseudo-code which outlines the complete workflow of Sketch-GNN can be found in Appendix E."
Open Source Code | Yes | "Our code will be made publicly available at https://github.com/SketchGNN/SketchGNN."
Open Datasets | Yes | "We test on two small graph benchmarks including Cora, Citeseer and several large graph benchmarks including ogbn-arxiv (169K nodes, 1.2M edges), Reddit (233K nodes, 11.6M edges), and ogbn-products (2.4M nodes, 61.9M edges) from [20, 45]." (see the loader sketch after this table)
Dataset Splits | Yes | "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix H."
Hardware Specification | Yes | "All experiments are conducted on an AWS EC2 instance with 192GB RAM and 8 NVIDIA A100 GPUs."
Software Dependencies | No | "Our implementation is based on PyTorch Geometric (PyG) [20] and DGL [49]. The software dependencies can be found in our Github repo."
Experiment Setup | Yes | "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix H."
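
The benchmarks cited in the Open Datasets row are all distributed through public loaders. The sketch below is not the authors' code (their own data pipeline lives in the linked repository); it only illustrates, with placeholder root paths and variable names, how these datasets can be obtained via the standard PyTorch Geometric and OGB APIs mentioned in the Software Dependencies row.

```python
# Minimal loader sketch (assumed setup, not the authors' pipeline):
# fetch the graph benchmarks named in the Open Datasets row using the
# public PyTorch Geometric and OGB dataset APIs. Root paths are placeholders.
from torch_geometric.datasets import Planetoid, Reddit
from ogb.nodeproppred import PygNodePropPredDataset

# Small citation graphs (Cora, Citeseer) via the Planetoid loader.
cora = Planetoid(root="data/Planetoid", name="Cora")
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")

# Large OGB benchmarks, which ship with their standard node splits.
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/OGB")
products = PygNodePropPredDataset(name="ogbn-products", root="data/OGB")
split_idx = arxiv.get_idx_split()  # dict with 'train', 'valid', 'test' node indices

# Reddit node-classification graph (233K nodes, 11.6M edges).
reddit = Reddit(root="data/Reddit")

print(arxiv[0])  # PyG Data object with x, edge_index, y, num_nodes
```

Each loader downloads and caches the raw files on first use, so the quoted dataset sizes can be checked directly against the returned Data objects.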