Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization
Authors: Aseem Baranwal, Kimon Fountoulakis, Aukosh Jagannath
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we provide experiments to demonstrate our theoretical results in Section 4. To solve problem (2) we used CVX, a package for specifying and solving convex programs (Grant & Boyd, 2013; Blondel et al., 2008). Throughout the section we set R = d in (2) for all our experiments. |
| Researcher Affiliation | Academia | 1David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada 2Department of Statistics and Actuarial Science, Department of Applied Mathematics, University of Waterloo, Waterloo, Canada. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | We use the popular real data Cora, PubMed and Wikipedia Network. These data are publicly available and can be downloaded from (Fey & Lenssen, 2019). |
| Dataset Splits | No | The paper mentions training and testing but does not explicitly specify a validation set or its split percentages. It refers to a 'semi-supervised setting where only a fraction of the labels are available' but not a validation split. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | To solve problem (2) we used CVX, a package for specifying and solving convex programs (Grant & Boyd, 2013; Blondel et al., 2008). While a software package is mentioned, a specific version number for CVX is not provided. |
| Experiment Setup | Yes | Throughout the section we set R = d in (2) for all our experiments. For this experiment we train and test on a CSBM with p = 0.5, q = 0.1, d = 60, and n = 400, which is roughly equal to 0.85·d^(3/2), and each class has 200 nodes. We present results averaged over 10 trials for the training data and 10 trials for the test data. |
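The quoted setup (a two-class CSBM with p = 0.5, q = 0.1, d = 60, n = 400, followed by a graph convolution and a linear classifier) can be sketched as below. This is a hedged illustration, not the paper's code: the exact CSBM parameterization (Gaussian means ±μ, the 1/d and 1/√d feature scalings) and the mean-difference classifier standing in for the paper's max-margin problem (2) solved with CVX are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_csbm(n=400, p=0.5, q=0.1, d=60, rng=rng):
    """Sample a two-class contextual SBM: p/q are intra-/inter-class
    edge probabilities, d is the feature dimension (assumed roles)."""
    y = np.repeat([0, 1], n // 2)                       # 200 nodes per class
    same = (y[:, None] == y[None, :])
    probs = np.where(same, p, q)
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                         # undirected, no self-loops
    mu = rng.normal(size=d) / d                         # hypothetical class-mean scale
    X = rng.normal(size=(n, d)) / np.sqrt(d) + np.outer(2 * y - 1, mu)
    return A, X, y

A, X, y = sample_csbm()

# One graph-convolution step: row-normalized adjacency times features.
deg = A.sum(axis=1, keepdims=True)
X_conv = (A @ X) / np.maximum(deg, 1)

def mean_margin_accuracy(feats, y):
    """Classify by sign along the difference of empirical class means —
    a crude stand-in for the paper's CVX-solved problem (2)."""
    m1, m0 = feats[y == 1].mean(axis=0), feats[y == 0].mean(axis=0)
    w, b = m1 - m0, -(m1 - m0) @ (m1 + m0) / 2
    pred = (feats @ w + b > 0).astype(int)
    return (pred == y).mean()

acc_raw = mean_margin_accuracy(X, y)
acc_conv = mean_margin_accuracy(X_conv, y)
print(f"raw accuracy: {acc_raw:.2f}, convolved accuracy: {acc_conv:.2f}")
```

With intra-class edges denser than inter-class ones (p > q), the convolution averages away feature noise faster than it shrinks the class-mean separation, which is the linear-separability improvement the paper establishes.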