LSPAN: Spectrally Localized Augmentation for Graph Consistency Learning

Authors: Heng-Kai Zhang, Yi-Ge Zhang, Zhi Zhou, Yu-Feng Li

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive empirical evaluation on real-world datasets clearly shows the performance gain of spectrally localized augmentation, as well as its good convergence and efficiency compared to existing graph methods. In this section, we give a comprehensive evaluation of the LSPAN method, including the prediction results, convergence analysis and the ablation study.
Researcher Affiliation | Academia | Heng-Kai Zhang, Yi-Ge Zhang, Zhi Zhou and Yu-Feng Li. National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China. {zhanghk,zhangyg,zhouz,liyf}@lamda.nju.edu.cn
Pseudocode | Yes | Algorithm 1: Augmentation Phase of LSPAN. Input: original graph G = (X, A), eigenvectors of the graph Laplacian {u_i}_{i=1}^N, parameters m and n, temperature T. Output: augmented graph G' = (X', A'). 1: Obtain the adjacency matrix: A' = A. 2: Compute the summation of eigenvectors: u' = u_m + u_{m+1} + ... + u_{m+n-1}. 3: Generate the augmented feature matrix: X' = [X ; T·u']. 4: return G' = (X', A').
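The quoted Algorithm 1 can be sketched in Python. This is a minimal reconstruction from the pseudocode alone, since no official code is released; the function name `lspan_augment` and the NumPy array layout (eigenvectors as columns, ordered by eigenvalue) are assumptions.

```python
import numpy as np

def lspan_augment(X, A, eigvecs, m, n, T):
    """Sketch of LSPAN's augmentation phase (Algorithm 1, as quoted).

    X       : (N, d) node feature matrix
    A       : (N, N) adjacency matrix
    eigvecs : (N, N) Laplacian eigenvectors; column i holds u_{i+1}
    m, n    : 1-based start index and number of eigenvectors to sum
    T       : temperature scaling the spectral feature
    """
    A_aug = A.copy()                                  # step 1: A' = A
    u = eigvecs[:, m - 1 : m - 1 + n].sum(axis=1)     # step 2: u' = u_m + ... + u_{m+n-1}
    X_aug = np.concatenate([X, T * u[:, None]], axis=1)  # step 3: X' = [X ; T*u']
    return X_aug, A_aug                               # step 4: G' = (X', A')
```

The augmentation leaves the topology untouched and only appends one extra, temperature-scaled spectral feature column, so downstream GNN layers need their input dimension increased by one.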
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | We perform evaluations on six publicly available benchmarks across four domains: i) citation networks, including CORA and CITESEER [Kipf and Welling, 2017]; ii) protein-protein interactions, including PPI [Hamilton et al., 2017]; iii) social networks, including BLOGCATALOG and FLICKR [Huang et al., 2017]; iv) air traffic, including AIRUSA [Wu et al., 2019]. Statistics and splits of them are summarized in Appendix C.1.
Dataset Splits | Yes | We follow the standard semi-supervised graph learning procedure [Kipf and Welling, 2017; Veličković et al., 2018]. The setup and implementation details of LSPAN can be found in Appendix C.3. Datasets. We perform evaluations on six publicly available benchmarks across four domains: i) citation networks, including CORA and CITESEER [Kipf and Welling, 2017]; ii) protein-protein interactions, including PPI [Hamilton et al., 2017]; iii) social networks, including BLOGCATALOG and FLICKR [Huang et al., 2017]; iv) air traffic, including AIRUSA [Wu et al., 2019]. Statistics and splits of them are summarized in Appendix C.1.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. It only mentions 'The setup and implementation details of LSPAN can be found in Appendix C.3.'
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Specifically, given an input graph G with feature matrix X and adjacency matrix A, we first generate S augmented graphs by Equation (6), where T, n are set as hyper-parameters and we randomly choose m from 1 to N - n + 1 for each augmentation. The setup and implementation details of LSPAN can be found in Appendix C.3.
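The quoted setup (S augmentations, with m drawn uniformly from {1, ..., N - n + 1} per augmentation) can be sketched as a small driver loop. This is an illustrative reconstruction, not the authors' code: the function name `generate_augmentations` and the inline spectral-feature step (summing n consecutive Laplacian eigenvectors, as in the quoted Algorithm 1) are assumptions.

```python
import numpy as np

def generate_augmentations(X, A, eigvecs, S, n, T, seed=None):
    """Generate S augmented views of graph (X, A).

    For each view, a start index m is sampled uniformly from
    {1, ..., N - n + 1}, and the temperature-scaled sum of
    eigenvectors u_m + ... + u_{m+n-1} is appended as a feature.
    """
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    views = []
    for _ in range(S):
        m = int(rng.integers(1, N - n + 2))           # m in [1, N - n + 1]
        u = eigvecs[:, m - 1 : m - 1 + n].sum(axis=1)  # spectrally localized band
        X_aug = np.concatenate([X, T * u[:, None]], axis=1)
        views.append((X_aug, A))                       # topology is shared
    return views
```

Randomizing m while fixing the band width n varies which part of the spectrum each view emphasizes, which is consistent with the consistency-learning objective the paper evaluates.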