Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks
Authors: Yimeng Min, Frederik Wenkel, Guy Wolf
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'We establish the advantages of the presented Scattering GCN with both theoretical results establishing the complementary benefits of scattering and GCN features, as well as experimental results showing the benefits of our method compared to leading graph neural networks for semi-supervised node classification.' |
| Researcher Affiliation | Academia | Yimeng Min, Mila Quebec AI Institute, Montreal, QC, Canada (minyimen@mila.quebec); Frederik Wenkel, Dept. of Math. and Stat., Université de Montréal and Mila Quebec AI Institute, Montreal, QC, Canada (frederik.wenkel@umontreal.ca); Guy Wolf, Dept. of Math. and Stat., Université de Montréal and Mila Quebec AI Institute, Montreal, QC, Canada (guy.wolf@umontreal.ca) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a link or other concrete access to its own source code, nor does it explicitly state that the code is available. |
| Open Datasets | Yes | 'Our comparisons are based on four popular graph datasets with varying sizes and connectivity structures summarized in Tab. 1 (see, e.g., [30] for Citeseer, Cora, and Pubmed, and [17] for DBLP).' |
| Dataset Splits | Yes | 'These are tuned and evaluated using the standard splits provided for the benchmark datasets for fair comparison.' |
| Hardware Specification | No | The paper states that runtime was 'measured for all methods on the same hardware', but does not provide any specific hardware details such as GPU/CPU models, memory, or processing power. |
| Software Dependencies | No | The paper mentions using the 'original implementations accompanying their publications' for competitor methods, but does not provide version numbers for any software dependencies used in its own experiments. |
| Experiment Setup | No | The paper states that hyperparameters and model composition were tuned via grid search and refers to the supplement for further details, but does not provide specific hyperparameter values or system-level training settings in the main text. |