Learning Graph Neural Networks with Approximate Gradient Descent

Authors: Qunwei Li, Shaofeng Zou, Wenliang Zhong (pp. 8438-8446)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments are further provided to validate our theoretical analysis." "We provide numerical experiments to support and validate our theoretical analysis." (the latter from the Experimental Results section)
Researcher Affiliation | Collaboration | Qunwei Li (Ant Group, Hangzhou, China), Shaofeng Zou (University at Buffalo, The State University of New York), Wenliang Zhong (Ant Group, Hangzhou, China)
Pseudocode | Yes | Algorithm 1: Approximate Gradient Descent for Learning GNNs
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | No | "We assume that the node feature matrix H ∈ R^{n×d} is generated independently from the standard Gaussian distribution, and the corresponding output y ∈ R^n is generated from the teacher network with true parameters W and v." "We assume that each node feature matrix H_j ∈ R^{n_j×d} is generated independently from the standard Gaussian distribution, and the corresponding output y_j ∈ R is generated from the teacher network with true parameters W and v." "We generate W from the unit sphere with a normalized Gaussian matrix, and generate v as a standard Gaussian vector. The nodes in the graphs are probabilistically connected according to the distribution Bernoulli(0.5)."
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) for training, validation, or test sets.
Hardware Specification | No | The paper does not specify the hardware (exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list the ancillary software (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "We choose d = 2 and d_out = 1, and set the variance ν to 0.04." "The learning rate α is chosen as 0.1." "The learning rate α is chosen as 0.005." (the two learning rates are quoted from separate experiments)
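The synthetic data generation quoted in the Open Datasets row can be sketched in NumPy. Only the quoted facts are taken from the paper (standard-Gaussian features, W normalized onto the unit sphere, standard-Gaussian v, Bernoulli(0.5) edges, d = 2); the number of nodes n, the number of hidden units k, and the teacher forward pass (ReLU over aggregated features, mean pooling, linear output) are illustrative assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10, 2, 4  # nodes, feature dim (d = 2 per the setup); n and k are assumptions

# Node features H: entries drawn i.i.d. from the standard Gaussian.
H = rng.standard_normal((n, d))

# Graph: each node pair connected independently with probability Bernoulli(0.5).
upper = np.triu(rng.random((n, n)) < 0.5, 1)
A = (upper | upper.T).astype(float)  # symmetric adjacency, no self-loops

# True parameters: W drawn as a Gaussian matrix normalized onto the unit
# sphere, v a standard Gaussian vector.
W = rng.standard_normal((d, k))
W /= np.linalg.norm(W)
v = rng.standard_normal(k)

def teacher(A, H, W, v):
    """Hypothetical one-hidden-layer GNN teacher: aggregate neighbor
    features, apply ReLU, mean-pool over nodes, then a linear readout."""
    hidden = np.maximum(A @ H @ W, 0.0)  # ReLU(A H W), shape (n, k)
    return hidden.mean(axis=0) @ v       # scalar graph-level output y_j

y = teacher(A, H, W, v)
```

Each training pair (H_j, y_j) in the paper's setting would be one independent draw of this procedure.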
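The quoted hyperparameters can likewise be dropped into a generic approximate-gradient-descent loop. This is not the paper's Algorithm 1: the toy least-squares objective and the interpretation of ν as the variance of a Gaussian gradient perturbation are stand-in assumptions, used only to illustrate the update rule w ← w - α(∇f(w) + ξ):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1  # learning rate, as quoted for one experiment
nu = 0.04    # variance, as quoted; used here as gradient-noise variance (an assumption)

# Toy least-squares objective f(w) = ||X w - y||^2 / (2m), standing in
# for the GNN training loss.
X = rng.standard_normal((50, 2))
w_true = rng.standard_normal(2)
y = X @ w_true

w = np.zeros(2)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)             # exact gradient of f
    grad += np.sqrt(nu) * rng.standard_normal(2)  # perturbation -> approximate gradient
    w -= alpha * grad                             # descent step with rate alpha
```

With a persistent noise variance ν, the iterate converges to a neighborhood of w_true whose radius scales with α and ν rather than to w_true exactly, which matches the flavor of an approximate-gradient analysis.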