Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling

Authors: Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The theoretical findings are also justified through numerical experiments. ... We will focus on numerical evaluations on synthetic data where we can control target functions and compare with A* explicitly.
Researcher Affiliation | Collaboration | 1 Department of Electrical, Computer, and System Engineering, Rensselaer Polytechnic Institute, NY, USA; 2 Department of Computer Science and Engineering, Michigan State University, MI, USA; 3 MIT-IBM Watson AI Lab, IBM Research, MA, USA; 4 IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA; 5 Department of Computer Science and Engineering, University at Buffalo, NY, USA.
Pseudocode | Yes | Algorithm 1: Training with SGD and graph topology sampling. (A hedged sketch of the per-iteration sampling step appears after the table.)
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository for the described methodology.
Open Datasets | No | We generate a graph G with N = 2000 nodes. ... Synthetic labels are generated based on (20) using A* as A. ... The paper uses synthetic data generated for the experiments and does not provide access information (link, DOI, or citation) to this generated dataset.
Dataset Splits | No | We generate a graph G with N = 2000 nodes. ... A three-layer GCN as defined in (4) with m neurons in each hidden layer is trained on a randomly selected set Ω of labeled nodes. The rest N - |Ω| labels are used for testing. ... The paper describes a split into training (|Ω|) and testing (N - |Ω|) sets, but does not explicitly mention or detail a separate validation split. (A minimal sketch of this node split appears after the table.)
Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper describes the algorithms and theoretical framework but does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with specific versions) used for implementation.
Experiment Setup | Yes | The learning rate η = 10⁻³. The mini-batch size is 5, and the dropout rate is 0.4. The total number of iterations is T·Tw = 4|Ω|. Our graph topology sampling method samples S1 = 0.9N1 and S2 = 0.9N2 nodes for both groups in each iteration. (A hedged training-loop sketch wiring these hyperparameters together appears after the table.)
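
The pseudocode row refers to Algorithm 1, training with SGD and graph topology sampling. As a rough illustration of the per-iteration sampling step, here is a minimal NumPy sketch: it keeps a random 90% subset of each of the two node groups and masks out the rest of the normalized adjacency. The function name `sample_topology`, the way the two groups are passed in, and the absence of any rescaling of the kept entries are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def sample_topology(adj, group1, group2, rate=0.9, rng=None):
    """One graph-topology-sampling step (illustrative sketch).

    adj      : (N, N) normalized adjacency matrix
    group1/2 : index arrays for the two node groups of sizes N1 and N2
               (the paper's grouping criterion is not quoted here)
    rate     : fraction of each group kept per iteration (0.9 in the paper's setup)
    """
    rng = np.random.default_rng() if rng is None else rng
    kept = np.concatenate([
        rng.choice(group1, size=int(rate * len(group1)), replace=False),
        rng.choice(group2, size=int(rate * len(group2)), replace=False),
    ])
    mask = np.zeros(adj.shape[0], dtype=bool)
    mask[kept] = True
    # Zero out rows and columns of nodes not sampled in this iteration;
    # whether and how the remaining entries are rescaled is an assumption.
    adj_sampled = adj * mask[:, None] * mask[None, :]
    return adj_sampled, kept
```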
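
The open-datasets and dataset-splits rows describe a synthetic graph with N = 2000 nodes, labels generated from the paper's Eq. (20), and a random split into |Ω| labeled training nodes and N - |Ω| test nodes. Below is a minimal sketch of that setup; the edge probability, the size of Ω (`num_labeled`), and the symmetric normalization are placeholders, since the quoted text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000                 # number of nodes in the synthetic graph
edge_prob = 0.01         # placeholder edge density, not taken from the paper
num_labeled = 400        # |Omega|; placeholder value, not taken from the paper

# Random undirected graph with self-loops and its symmetrically normalized adjacency.
A = (rng.random((N, N)) < edge_prob).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(N)
deg = A.sum(axis=1)
adj_norm = A / np.sqrt(np.outer(deg, deg))

# Randomly selected labeled nodes Omega (training); the remaining
# N - |Omega| nodes are used for testing, as described in the paper.
omega = rng.choice(N, size=num_labeled, replace=False)
test_nodes = np.setdiff1d(np.arange(N), omega)
```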
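
The experiment-setup row gives concrete hyperparameters: learning rate η = 10⁻³, mini-batch size 5, dropout rate 0.4, T·Tw = 4|Ω| total iterations, and per-iteration sampling of S1 = 0.9N1 and S2 = 0.9N2 nodes. The sketch below wires these numbers into a PyTorch-style SGD loop, reusing `adj_norm`, `omega`, `N`, and `sample_topology` from the sketches above. The three-layer GCN, the squared loss, the feature dimension, the width m, and the node grouping are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
import torch

# Hyperparameters quoted in the paper's experiment setup.
eta = 1e-3                      # learning rate
batch_size = 5                  # mini-batch size
dropout_rate = 0.4              # dropout rate
total_iters = 4 * len(omega)    # T * Tw = 4|Omega| SGD iterations

# Placeholder partition of the nodes into two groups of sizes N1 and N2;
# the paper's grouping criterion is not given in the quoted text.
group1 = np.arange(N // 2)
group2 = np.arange(N // 2, N)

class GCN(torch.nn.Module):
    """Illustrative three-layer GCN with m hidden units per layer,
    standing in for the network defined in the paper's Eq. (4)."""
    def __init__(self, in_dim, m, p_drop):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, m)
        self.lin2 = torch.nn.Linear(m, m)
        self.lin3 = torch.nn.Linear(m, 1)
        self.drop = torch.nn.Dropout(p_drop)

    def forward(self, x, adj):
        h = self.drop(torch.relu(adj @ self.lin1(x)))
        h = self.drop(torch.relu(adj @ self.lin2(h)))
        return (adj @ self.lin3(h)).squeeze(-1)

# Placeholder node features and labels (the paper generates labels via Eq. (20)).
d, m = 16, 64
features = torch.randn(N, d)
labels = torch.randn(N)

model = GCN(d, m, dropout_rate)
optimizer = torch.optim.SGD(model.parameters(), lr=eta)
loss_fn = torch.nn.MSELoss()    # squared loss is an assumption

rng = np.random.default_rng(1)
for it in range(total_iters):
    # Mini-batch of labeled nodes and a freshly sampled topology per iteration.
    batch = torch.as_tensor(rng.choice(omega, size=batch_size, replace=False),
                            dtype=torch.long)
    adj_s, _ = sample_topology(adj_norm, group1, group2, rate=0.9, rng=rng)
    adj_s = torch.as_tensor(adj_s, dtype=torch.float32)

    optimizer.zero_grad()
    out = model(features, adj_s)
    loss = loss_fn(out[batch], labels[batch])
    loss.backward()
    optimizer.step()
```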