Stability and Generalization of lp-Regularized Stochastic Learning for GCN

Authors: Shiyu Liu, Linsen Wei, Shaogao Lv, Ming Li

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct multiple empirical experiments to validate our theoretical findings. We conduct several numerical experiments to illustrate the superiority of our method to traditional smooth-based GCN, and we also observe some sparse solutions through our experiments as p is sufficiently close to 1.
Researcher Affiliation | Academia | (1) University of Electronic Science and Technology of China, China; (2) School of Astronautics, Northwestern Polytechnical University, China; (3) Department of Statistics and Data Science, Nanjing Audit University, China; (4) Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, China
Pseudocode | No | The paper describes the algorithmic steps in prose and equations (e.g., Eqs. (8) and (9)) but does not provide a formally labeled pseudocode block or algorithm environment. (An illustrative sketch of such an update is given after this table.)
Open Source Code | No | The paper contains no explicit statement about open-sourcing the code and provides no link to a code repository.
Open Datasets | Yes | We conduct experiments on three citation network datasets: Citeseer, Cora, and Pubmed [Sen et al., 2008].
Dataset Splits | No | The paper mentions a 'training set D' and describes how D_i is generated to evaluate the generalization gap ('altering it with a different random point'), but it does not specify standard train/validation/test splits (e.g., percentages or counts) or reference predefined splits for reproducibility.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Training settings. For each experiment, we initialize the parameters of GCN models with the same random seeds and then train all models for a maximum of 200 epochs using the proposed Inexact Proximal SGD. We repeat the experiments 10 times and report the average performance as well as the standard variance. For all methods, the hyperparameters are tuned from the following search space: (1) learning rate: {1, 0.5, 0.1, 0.05}; (2) weight decay: 0; (3) dropout rate: {0.3, 0.5}; (4) regularization parameter λ is set to 0.001. (An illustrative training-loop sketch covering these settings is given after this table.)
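
Because the Pseudocode and Open Source Code rows are both "No", the update rule has to be inferred from the paper's prose. Purely as an illustration, the sketch below shows what an lp-regularized proximal-SGD-style step of the kind the paper describes could look like in PyTorch: a plain gradient step on the data-fit loss, followed by an approximate (inexact) elementwise prox of the lp penalty. The fixed-point thresholding used to approximate the prox, and the names `inexact_lp_prox` and `proximal_sgd_step`, are stand-ins introduced here; they are not reproduced from the paper's Equations (8) and (9).

```python
# Illustrative only: an l_p-regularized proximal-SGD-style update in PyTorch.
# The thresholding iteration is a generic approximation of the prox and is NOT
# taken from the paper's Equations (8)-(9).
import torch

@torch.no_grad()
def inexact_lp_prox(z, step, lam, p=1.1, iters=5):
    """Approximately solve min_w 0.5*(w - z)^2 + step*lam*|w|^p elementwise.

    A closed form exists only for special values of p, so a few fixed-point
    thresholding iterations are used instead (hence "inexact").
    """
    w = z.clone()
    for _ in range(iters):
        shrink = step * lam * p * w.abs().pow(p - 1)
        w = torch.sign(z) * torch.clamp(z.abs() - shrink, min=0.0)
    return w

def proximal_sgd_step(model, loss_fn, lr=0.1, lam=1e-3, p=1.1):
    """One step: SGD on the unregularized loss, then a prox step for the l_p penalty."""
    model.zero_grad()
    loss = loss_fn(model)  # data-fit loss only, e.g. cross-entropy on training nodes
    loss.backward()
    with torch.no_grad():
        for w in model.parameters():
            if w.grad is None:
                continue
            w -= lr * w.grad                          # gradient step
            w.copy_(inexact_lp_prox(w, lr, lam, p))   # (inexact) proximal step
    return loss.item()
```

For p = 1 the iteration collapses to a single soft-thresholding pass, which is consistent with the sparse solutions the paper reports as p gets sufficiently close to 1.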
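
The Open Datasets and Experiment Setup rows pin down most of the training protocol: the Citeseer, Cora, and Pubmed citation graphs, 200 epochs, 10 repeats with fixed seeds, the listed learning-rate and dropout grids, zero weight decay, and λ = 0.001. Below is a minimal sketch of that protocol, assuming PyTorch Geometric's `Planetoid` loader and a stock two-layer `GCNConv` model as stand-ins for the unreleased implementation; the hidden width of 16 is likewise an assumption, and the plain SGD step marked in the loop is where the lp-regularized update sketched above would go.

```python
# Sketch of the reported protocol (200 epochs, 10 seeds, grid below), assuming
# PyTorch Geometric's Planetoid datasets and GCNConv as stand-ins for the
# authors' (unreleased) implementation.
import itertools
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

SEARCH_SPACE = {
    "lr": [1, 0.5, 0.1, 0.05],  # learning-rate grid from the paper
    "dropout": [0.3, 0.5],      # dropout grid from the paper
}
LAM, WEIGHT_DECAY, EPOCHS, REPEATS = 1e-3, 0.0, 200, 10  # fixed settings from the paper

class GCN(torch.nn.Module):
    def __init__(self, n_feat, n_class, n_hid=16, dropout=0.5):  # hidden width assumed
        super().__init__()
        self.conv1, self.conv2 = GCNConv(n_feat, n_hid), GCNConv(n_hid, n_class)
        self.dropout = dropout

    def forward(self, data):
        x = F.relu(self.conv1(data.x, data.edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.conv2(x, data.edge_index)

def run_once(name, lr, dropout, seed):
    torch.manual_seed(seed)  # "same random seeds" across models, per the paper
    data = Planetoid(root="data", name=name)[0]
    model = GCN(data.num_node_features, int(data.y.max()) + 1, dropout=dropout)
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=WEIGHT_DECAY)
    for _ in range(EPOCHS):
        model.train()
        opt.zero_grad()
        out = model(data)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        # NOTE: the paper's l_p proximal step (lambda = 1e-3) would replace this plain SGD step.
        loss.backward()
        opt.step()
    model.eval()
    pred = model(data).argmax(dim=-1)
    return (pred[data.test_mask] == data.y[data.test_mask]).float().mean().item()

# Example: grid search on Cora with 10 repeats, averaging test accuracy per configuration.
if __name__ == "__main__":
    for lr, dr in itertools.product(SEARCH_SPACE["lr"], SEARCH_SPACE["dropout"]):
        accs = [run_once("Cora", lr, dr, seed) for seed in range(REPEATS)]
        print(f"lr={lr}, dropout={dr}: mean test acc={sum(accs) / len(accs):.3f}")
```

This uses the standard Planetoid splits (`train_mask` / `test_mask`) shipped with PyTorch Geometric; as the Dataset Splits row notes, the paper itself does not state which splits were used.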