SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks
Authors: Bahare Fatemi, Layla El Asri, Seyed Mehran Kazemi
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn a task-specific graph structure on established benchmarks. |
| Researcher Affiliation | Collaboration | Bahare Fatemi (University of British Columbia, bfatemi@cs.ubc.ca); Layla El Asri (Borealis AI, layla.elasri@borealisai.com); Seyed Mehran Kazemi (Google Research, mehrankazemi@google.com) |
| Pseudocode | No | The paper describes its methods in prose and with a diagram, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | Due to copyright issues, we are not able to share an anonymous version of our code. However, the code will be publicly released upon the acceptance of the paper. |
| Open Datasets | Yes | We use three established benchmarks in the GNN literature namely Cora, Citeseer, and Pubmed [41] as well as the ogbn-arxiv dataset [16]... Following [12, 6], we also experiment with several classification (non-graph) datasets available in scikit-learn [36] including Wine, Cancer, Digits, and 20News. Furthermore, following [20], we also provide results on MNIST [28]. (See the dataset-loading sketch below the table.) |
| Dataset Splits | Yes | For Cora and Citeseer, the LDS model uses the train data for learning the parameters of the classification GCN, half of the validation for learning the parameters of the adjacency matrix... and the other half of the validation set for early stopping and tuning the other hyperparameters. Besides experimenting with the original setups of these two datasets, we also consider a setup that is closer to that of LDS: we use the train set and half of the validation set for training and the other half of validation for early stopping and hyperparameter tuning. (See the validation-split sketch below the table.) |
| Hardware Specification | No | The paper states 'See Implementation Details in the supplementary material' for compute resources and hardware specifications, but these details are not present in the main body of the paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers in the main text. It defers some implementation details to the supplementary material. |
| Experiment Setup | No | The paper states 'We defer the implementation details and the best hyperparameter settings for our model on all the datasets to the supplementary material.', indicating that these specific experimental setup details are not present in the main text. |
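The "Open Datasets" row names several public scikit-learn benchmarks (Wine, Cancer, Digits, 20News). The snippet below is a minimal sketch of how those datasets can be obtained with standard scikit-learn loaders; it is not the authors' pipeline and makes no assumption about their preprocessing.

```python
# Illustrative loading of the non-graph scikit-learn benchmarks named in the
# "Open Datasets" row (Wine, Cancer, Digits, 20News). This is only a sketch of
# how the public datasets can be fetched, not the paper's data pipeline.
from sklearn.datasets import (
    load_wine,
    load_breast_cancer,
    load_digits,
    fetch_20newsgroups_vectorized,
)

wine = load_wine()                                   # 178 samples, 13 features, 3 classes
cancer = load_breast_cancer()                        # 569 samples, 30 features, 2 classes
digits = load_digits()                               # 1797 samples, 64 features, 10 classes
news = fetch_20newsgroups_vectorized(subset="all")   # downloads on first call

for name, ds in [("Wine", wine), ("Cancer", cancer), ("Digits", digits), ("20News", news)]:
    print(name, ds.data.shape)
```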
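The "Dataset Splits" row quotes a protocol in which the validation set is divided into two halves, one half used for tuning (in LDS, for learning the adjacency matrix) and the other for early stopping and hyperparameter selection. The sketch below only illustrates such a fifty-fifty split; the function and variable names (`split_validation`, `val_idx`, `seed`) are hypothetical and not taken from the authors' code.

```python
# Hypothetical sketch of the validation-split protocol quoted above: shuffle the
# validation indices with a fixed seed and return two equal halves, one for
# tuning and one for early stopping / hyperparameter selection.
import numpy as np

def split_validation(val_idx: np.ndarray, seed: int = 0):
    """Shuffle the validation indices and return two equal halves."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(val_idx)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Toy usage with a 500-node validation set (e.g., a Cora-style split).
val_tune, val_early_stop = split_validation(np.arange(140, 640), seed=42)
print(len(val_tune), len(val_early_stop))  # 250 250
```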