Analysis of Corrected Graph Convolutions
Authors: Robert J. Wang, Aseem Baranwal, Kimon Fountoulakis
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we demonstrate our results empirically. For synthetic data, we show Theorems 4.1 and 4.2 for linear binary classification. For real data, we show that removing the principal component of the adjacency matrix exhibits positive effects on multi-class node classification problems as well. *(sketched after the table)* |
| Researcher Affiliation | Academia | Robert Wang, Aseem Baranwal, Kimon Fountoulakis, Cheriton School of Computer Science, University of Waterloo |
| Pseudocode | No | The paper describes methods in text and mathematical formulas but does not include a dedicated pseudocode or algorithm listing. |
| Open Source Code | Yes | We provide code that reproduces the experiments by just executing simple Python notebooks. |
| Open Datasets | Yes | Cora, CiteSeer, and PubMed citation networks [Sen et al., 2008] |
| Dataset Splits | No | The paper specifies dataset parameters like node count and feature count for synthetic data, and mentions the citation networks used for real data, but it does not provide explicit train/validation/test splits. |
| Hardware Specification | No | The paper does not specify any particular hardware components (e.g., specific GPU or CPU models) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python version, specific library versions like PyTorch 1.9). |
| Experiment Setup | Yes | For synthetic data from the CSBM, ... We choose n = 2000 nodes with 20 features for each node, sampled from a Gaussian mixture. The intra-edge probability is fixed to p = O(log^3(n)/n). We perform linear classification to demonstrate the results in Theorem 4.1 and Theorem 4.2, training a one-layer GCN both with and without the corrected convolutions and performing an empirical comparison. *(see the sketches after the table)* |
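The Experiment Setup row gives n = 2000 nodes, 20 Gaussian-mixture features per node, and an intra-edge probability on the order of log^3(n)/n. A minimal sketch of such a two-class CSBM sampler is below; the class means, the inter-class probability `q`, and the helper name `sample_csbm` are assumptions on our part, since the excerpt does not pin them down.

```python
import numpy as np

def sample_csbm(n=2000, d=20, p=None, q=None, rng=None):
    """Sample a two-class contextual stochastic block model (CSBM).

    Sketch of the synthetic setup quoted in the table: n nodes, d
    Gaussian-mixture features, intra-class edge probability p on the
    order of log^3(n)/n. The means +/- mu and the inter-class
    probability q are assumptions, not values from the paper.
    """
    rng = rng or np.random.default_rng(0)
    if p is None:
        p = np.log(n) ** 3 / n             # intra-class edge probability
    if q is None:
        q = p / 2                          # inter-class probability (assumed)
    y = rng.integers(0, 2, size=n)         # binary class labels
    mu = np.ones(d) / np.sqrt(d)           # class mean direction (assumed)
    X = rng.normal(size=(n, d)) + np.where(y[:, None] == 1, mu, -mu)
    same = y[:, None] == y[None, :]        # same-class indicator matrix
    edges = rng.random((n, n)) < np.where(same, p, q)
    A = np.triu(edges, 1)                  # upper triangle only, no self-loops
    return (A | A.T).astype(float), X, y   # symmetric adjacency, features, labels
```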
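The correction quoted in the Research Type row, removing the principal component of the adjacency matrix, can be sketched as follows. This is a minimal illustration rather than the authors' released code; the function name `corrected_adjacency` and the use of a dense symmetric adjacency are assumptions.

```python
import numpy as np

def corrected_adjacency(A: np.ndarray) -> np.ndarray:
    """Remove the principal (rank-one) component of a symmetric adjacency matrix.

    Sketch of the correction described in the paper's quoted text: compute
    the top eigenpair (lam, v) of A and subtract the rank-one term
    lam * v v^T before convolving.
    """
    eigvals, eigvecs = np.linalg.eigh(A)   # eigh returns ascending eigenvalues
    lam, v = eigvals[-1], eigvecs[:, -1]   # principal eigenpair
    return A - lam * np.outer(v, v)
```

A one-layer linear GCN comparison along the lines the table describes would then convolve the features with either `A` or `corrected_adjacency(A)` before the linear classifier; the choice of degree normalization is left open here because the table excerpt does not specify it.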