Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case

Authors: Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Lastly, we provide numerical experiments to demonstrate the validity of our analysis and the effectiveness of the proposed learning algorithm for GNNs. [...] Section 5 shows the numerical results
Researcher Affiliation | Collaboration | 1Dept. of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, NY, USA; 2MIT-IBM Watson AI Lab, Cambridge, MA, USA; 3IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA.
Pseudocode | Yes | Algorithm 1: Accelerated Gradient Descent Algorithm with Tensor Initialization
Open Source Code | No | The paper does not provide any links or explicit statements about releasing open-source code for the described methodology.
Open Datasets | No | We verify our results on synthetic graph-structured data. [...] The feature vectors {x_n}_{n=1}^N are randomly generated from the standard Gaussian distribution N(0, I_d).
Dataset Splits | No | The paper mentions "Partition Ω into T = log(1/ε) disjoint subsets, denoted as {Ω_t}_{t=1}^T" for the algorithm, but does not specify a separate validation dataset split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | Input: X, {y_n}_{n∈Ω}, A, the step size η, the momentum constant β, and the error tolerance ε; [...] The dimension d of the feature vectors is chosen as 10, and the sample size |Ω| is chosen as 2000. [...] The initialization is randomly selected from {W^(0) : ‖W^(0) - W*‖_F / ‖W*‖_F < 0.5} to reduce the computation.
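
To make the quoted setup concrete, below is a minimal sketch of the experiment as we read it: synthetic Gaussian features with d = 10 and |Ω| = 2000, a one-hidden-layer GNN trained with a heavy-ball accelerated gradient descent update, and an initialization inside the quoted 0.5 relative-error ball. The model form (ReLU activation, K = 5 hidden neurons, averaged hidden units), the random graph, labeling every node, and the values of η and β are all our assumptions; only d, |Ω|, the Gaussian features, and the initialization radius come from the quoted text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quoted setup: feature dimension d = 10, sample size |Omega| = 2000.
# We label every node, so N = |Omega| (an assumption for simplicity).
N, d, K = 2000, 10, 5                  # K hidden neurons: our assumption
X = rng.standard_normal((N, d))        # x_n ~ N(0, I_d), as quoted

# Random symmetric graph with self-loops, row-normalized for aggregation
# (the paper's synthetic graph model is not quoted; this is illustrative).
adj = (rng.random((N, N)) < 0.01).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 1.0)
A = adj / adj.sum(axis=1, keepdims=True)
AX = A @ X                             # aggregated node features

def forward(W):
    # One-hidden-layer GNN with ReLU and averaged hidden units (our reading).
    return np.maximum(AX @ W, 0.0).mean(axis=1)

W_star = rng.standard_normal((d, K))   # ground-truth weights W*
y = forward(W_star)                    # noiseless labels y_n

def grad(W):
    # Gradient of the empirical risk f(W) = (1/2N) * sum_n (yhat_n - y_n)^2.
    r = forward(W) - y                 # residuals
    G = np.empty_like(W)
    for k in range(K):
        mask = (AX @ W[:, k] > 0).astype(float)   # ReLU derivative
        G[:, k] = AX.T @ (r * mask) / (N * K)
    return G

# Initialization inside the quoted ball ||W0 - W*||_F / ||W*||_F < 0.5.
Z = rng.standard_normal((d, K))
W = W_star + 0.3 * np.linalg.norm(W_star) * Z / np.linalg.norm(Z)
W_prev = W.copy()

# Heavy-ball accelerated gradient descent (our reading of Algorithm 1):
#   W(t+1) = W(t) - eta * grad f(W(t)) + beta * (W(t) - W(t-1)).
eta, beta = 20.0, 0.5                  # illustrative values, tuned for this instance
for t in range(300):
    W, W_prev = W - eta * grad(W) + beta * (W - W_prev), W

print("relative error:", np.linalg.norm(W - W_star) / np.linalg.norm(W_star))
```

The update inside the loop, W(t+1) = W(t) - η∇f(W(t)) + β(W(t) - W(t-1)), is the standard heavy-ball form we take Algorithm 1's "momentum constant β" to refer to; the paper's tensor initialization step is replaced here by the quoted random initialization near W*, which the authors state they used to reduce computation.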