Learning on Large Graphs using Intersecting Communities

Authors: Ben Finkelshtein, Ismail Ceylan, Michael Bronstein, Ron Levie

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically validate our methods with the following experiments:
- Runtime analysis (Section 6.1): We report the forward-pass runtimes of ICGu-NN and GCN [30], empirically validating the theoretical advantage of the former. We further extend this analysis in Appendices F.7 and F.8.
- Node classification (Appendix F.1): We evaluate our method on real-world node classification datasets [43, 45, 36], observing that the model performance is competitive with standard approaches.
- Node classification using Subgraph SGD (Section 6.2 and Appendix F.3): We evaluate our subgraph SGD method (Section 4.3) to identify the effect of sampling on the model performance on the tolokers and Flickr datasets [43, 65]. We find the model's performance to be robust on tolokers and state-of-the-art on Flickr.
- Spatio-temporal tasks (Section 6.3): We evaluate ICGu-NN on real-world spatio-temporal tasks [35] and obtain performance competitive with domain-specific baselines.
- Comparison to graph coarsening methods (Appendix F.2): We provide an empirical comparison between ICG-NNs and a variety of graph coarsening methods on the Reddit [23] and Flickr [65] datasets, where ICG-NNs achieve state-of-the-art performance.
- Additional experiments: We perform an ablation study over the number of communities (Appendix F.4) and the choice of initialization in Section 4.2 (Appendix F.6). We moreover experimentally demonstrate a positive correlation between the Frobenius error and the cut norm error, as hinted by Theorem 3.1 (Appendix F.5), and perform a memory allocation analysis (Appendix F.9).
Researcher Affiliation | Collaboration | Ben Finkelshtein (University of Oxford); Ismail Ilkan Ceylan (University of Oxford); Michael Bronstein (University of Oxford / AITHYRA); Ron Levie (Technion - Israel Institute of Technology)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | We made our codebase available online: https://github.com/benfinkelshtein/ICGNN.
Open Datasets | Yes | We evaluate our method on real-world node classification datasets [43, 45, 36]... We evaluate ICGu-NN on real-world spatio-temporal tasks [35]... We evaluate ICG-NN and ICGu-NN on the large graph datasets Flickr [65] and Reddit [23].
Dataset Splits | Yes | We segment the datasets into windows of 12 time steps and train the models to predict the subsequent 12 observations. For all datasets, these windows are divided sequentially into 70% for training, 10% for validation, and 20% for testing. We report the mean absolute error (MAE) and standard deviation averaged over the forecastings.
Hardware Specification | Yes | All our experiments are run on a single NVidia L40 GPU.
Software Dependencies | No | The paper mentions software components such as GCN [30], DCRNN [35], Graph Wave Net [60], AGCRN [8], the Adam optimizer, and GRUs, but does not provide specific version numbers for any of them.
Experiment Setup | Yes | Additionally, we use the Adam optimizer and detail all hyperparameters in Appendix I. ... In Tables 10 to 12, we report the hyper-parameters used in our real-world node-classification, spatio-temporal and graph coarsening benchmarks.
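The sequential windowed split quoted under Dataset Splits (windows of 12 input steps predicting the next 12, divided in temporal order into 70% / 10% / 20%) can be sketched in a few lines. This is an illustrative reconstruction, not code from the authors' repository; the function names and the toy series are invented.

```python
# Sketch of the described protocol: slide a window of 12 input steps over the
# series, pair it with the next 12 steps as the prediction target, then split
# the resulting windows sequentially (no shuffling) into train/val/test.

def make_windows(series, in_len=12, out_len=12):
    """Return (input, target) pairs of consecutive time steps."""
    pairs = []
    for start in range(len(series) - in_len - out_len + 1):
        x = series[start : start + in_len]
        y = series[start + in_len : start + in_len + out_len]
        pairs.append((x, y))
    return pairs

def sequential_split(pairs, train_frac=0.7, val_frac=0.1):
    """Split windows in temporal order into 70% train, 10% val, 20% test."""
    n = len(pairs)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (pairs[:n_train],
            pairs[n_train : n_train + n_val],
            pairs[n_train + n_val :])

series = list(range(100))      # toy univariate series with 100 time steps
pairs = make_windows(series)   # 100 - 12 - 12 + 1 = 77 windows
train, val, test = sequential_split(pairs)
print(len(train), len(val), len(test))   # -> 53 7 17
```

Keeping the split sequential (rather than shuffled) prevents target leakage: every validation and test window lies strictly after the training windows in time.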