IGLU: Efficient GCN Training via Lazy Updates

Authors: S Deepak Narayanan, Aditya Sinha, Prateek Jain, Purushottam Kar, Sundararajan Sellamanickam

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Benchmark experiments show that IGLU offers up to 1.2% better accuracy despite requiring up to 88% less compute. Section 4, 'Empirical Evaluation', details datasets, baselines, and results, including 'Test accuracies are reported in Table 1 and convergence plots are shown in Figure 2', confirming that the study is empirical.
Researcher Affiliation | Collaboration | S Deepak Narayanan & Aditya Sinha, Microsoft Research India ({sdeepaknarayanan1,adityaasinha28}@gmail.com); Prateek Jain, Microsoft Research India (prajain@google.com); Purushottam Kar, IIT Kanpur & Microsoft Research India (purushot@cse.iitk.ac.in); Sundararajan Sellamanickam, Microsoft Research India (ssrajan@microsoft.com)
Pseudocode | Yes | Algorithm 1 (IGLU: backprop order) and Algorithm 2 (IGLU: inverted order); an illustrative sketch of the underlying lazy-update idea appears after this table.
Open Source Code | Yes | An implementation of IGLU is available at https://github.com/sdeepaknarayanan/iglu
Open Datasets | Yes | The following five benchmark tasks were used: (1) Reddit (Hamilton et al., 2017), (2) PPI-Large (Hamilton et al., 2017), (3) Flickr (Zeng et al., 2020), (4) OGBN-Arxiv (Hu et al., 2020), and (5) OGBN-Proteins (Hu et al., 2020).
Dataset Splits | Yes | Training-validation-test splits and metrics were used in a manner consistent with the original release of the datasets: specifically, ROC-AUC was used for OGBN-Proteins and micro-F1 for all other datasets. Dataset descriptions and statistics are presented in Appendix B; Table 8 details the benchmark node classification datasets. Train/Val/Test fractions: PPI-Large 0.79/0.11/0.10; Reddit 0.66/0.10/0.24; Flickr 0.50/0.25/0.25; OGBN-Proteins 0.65/0.16/0.19; OGBN-Arxiv 0.54/0.18/0.28 (a split-loading sketch follows the table).
Hardware Specification | Yes | We implement IGLU in TensorFlow 1.15.2 and perform all experiments on an NVIDIA V100 GPU (32 GB memory) and an Intel Xeon CPU (2.6 GHz).
Software Dependencies | Yes | We implement IGLU in TensorFlow 1.15.2 and perform all experiments on an NVIDIA V100 GPU (32 GB memory) and an Intel Xeon CPU (2.6 GHz); an environment-check sketch follows the table.
Experiment Setup | Yes | Model Selection and Hyperparameter Tuning. Model selection was done for all methods based on their validation set performance. For IGLU, GraphSAGE, and VR-GCN, an exhaustive grid search was done over general hyperparameters such as batch size, learning rate, and dropout rate (Srivastava et al., 2014). In addition, method-specific hyperparameter sweeps were carried out, as detailed in Appendix A.4. IGLU: learning rate {0.01, 0.001} with learning-rate decay schemes, batch size {512, 2048, 4096, 10000}, dropout {0.0, 0.2, 0.5, 0.7} (a grid-search sketch follows the table).
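
The Pseudocode row names Algorithm 1 (backprop order) and Algorithm 2 (inverted order) but does not reproduce them. Below is a minimal, illustrative sketch of the general lazy-update idea only, assuming a toy two-layer GCN with dense matrices and a squared loss; it is not the authors' algorithm, and every name, shape, and hyperparameter here is hypothetical.

```python
# Illustrative lazy-update sketch (not the paper's code): per-layer caches are
# refreshed once per epoch, and each mini-batch step updates one layer's weights
# using the stale caches, so no recursive neighbourhood expansion is needed.
import numpy as np

rng = np.random.default_rng(0)
N, d0, d1, d2 = 200, 16, 32, 8                 # nodes, input/hidden/output dims
A = rng.random((N, N)) < 0.05                  # toy adjacency
A = (A / np.maximum(A.sum(1, keepdims=True), 1)).astype(float)  # row-normalised
X = rng.standard_normal((N, d0))
Y = rng.standard_normal((N, d2))
W1 = rng.standard_normal((d0, d1)) * 0.1
W2 = rng.standard_normal((d1, d2)) * 0.1
lr, batch_size = 0.01, 50

for epoch in range(5):
    # Refresh caches with one full forward/backward pass (the expensive, infrequent step).
    H1 = np.maximum(A @ X @ W1, 0.0)           # cached hidden embeddings
    H2 = A @ H1 @ W2                           # cached predictions
    G2 = (H2 - Y) / N                          # cached dL/dH2 for L = 0.5*||H2 - Y||^2 / N
    G1 = (A.T @ G2 @ W2.T) * (H1 > 0)          # cached (soon stale) dL/dZ1

    # Cheap mini-batch updates that reuse the stale caches instead of recomputing them.
    for start in range(0, N, batch_size):
        B = np.arange(start, min(start + batch_size, N))
        grad_W2 = (A[B] @ H1).T @ G2[B]        # uses stale H1
        grad_W1 = (A[B] @ X).T @ G1[B]         # uses stale upstream gradient G1
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1
```

The point of the sketch is the split between an infrequent full refresh and cheap per-layer mini-batch steps that reuse stale quantities, which is the general mechanism behind the compute savings reported above.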
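
For the Open Datasets and Dataset Splits rows, the paper does not show loading code; the sketch below assumes the standard ogb package (pip install ogb) for the OGBN datasets, and the printed fractions can be compared against the Table 8 figures quoted above.

```python
# Hedged sketch: load OGBN-Arxiv with the ogb package and report split fractions.
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv")
graph, labels = dataset[0]                       # graph dict and node labels
split_idx = dataset.get_idx_split()              # {'train': ..., 'valid': ..., 'test': ...}
num_nodes = graph["num_nodes"]
for split in ("train", "valid", "test"):
    print(split, round(len(split_idx[split]) / num_nodes, 2))   # roughly 0.54 / 0.18 / 0.28
```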
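
For the Hardware Specification and Software Dependencies rows, a minimal environment check is sketched below; the pip pin is an assumption, since the paper reports only the TensorFlow version and the GPU/CPU used, not an installation recipe.

```python
# Hedged sketch: verify the reported stack (TensorFlow 1.15.2 on an NVIDIA V100).
#   pip install tensorflow-gpu==1.15.2
import tensorflow as tf

print("TensorFlow:", tf.__version__)                  # expected: 1.15.2
print("GPU available:", tf.test.is_gpu_available())   # True when the V100 is visible
print("GPU device:", tf.test.gpu_device_name())       # e.g. '/device:GPU:0'
```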
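
For the Experiment Setup row, the sketch below enumerates the quoted IGLU grid with itertools.product and selects by validation performance as described; train_and_evaluate is a hypothetical stand-in (here a random stub) for the authors' training loop.

```python
# Hedged sketch of the exhaustive grid search described for IGLU.
from itertools import product
import random

def train_and_evaluate(lr, batch_size, dropout):
    # Placeholder: train IGLU with these hyperparameters and return the
    # validation metric (micro-F1, or ROC-AUC for OGBN-Proteins).
    return random.random()

grid = product([0.01, 0.001],                 # learning rate
               [512, 2048, 4096, 10000],      # batch size
               [0.0, 0.2, 0.5, 0.7])          # dropout
best_cfg, best_val = None, float("-inf")
for lr, bs, dr in grid:
    val = train_and_evaluate(lr, bs, dr)
    if val > best_val:
        best_cfg, best_val = (lr, bs, dr), val
print("Selected (lr, batch size, dropout):", best_cfg)
```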