Self-Organization Preserved Graph Structure Learning with Principle of Relevant Information
Authors: Qingyun Sun, Jianxin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superior effectiveness and robustness of PRI-GSL. We evaluate PRI-GSL on node classification and graph denoising tasks to verify its capability of improving the effectiveness and robustness of graph representation learning. Then we provide the analyses of the PRI loss, the structural role encodings, and the learned structure. |
| Researcher Affiliation | Academia | 1 Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing 100191, China 2 School of Computer Science and Engineering, Beihang University, Beijing 100191, China 3 Department of Computer Science, University of Illinois at Chicago, Chicago, USA |
| Pseudocode | Yes | Algorithm 1: The overall process of PRI-GSL for node classification |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | The evaluation datasets are Squirrel, Chameleon (Rozemberczki, Allen, and Sarkar 2021), Actor (Pei et al. 2020), CiteSeer, PubMed, Cora (Sen et al. 2008) and Photo (Shchur et al. 2018). |
| Dataset Splits | Yes | We set the number of nodes in each class to be 20/30 for training/validation and take the remaining nodes for the test. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a '2-layer GCN for node classification' and 'Chebyshev polynomial approximation', but does not provide specific version numbers for any software libraries, frameworks, or dependencies. |
| Experiment Setup | Yes | For the GNN encoders, we use a 2-layer GCN for node classification. We set the representation dimension d=32, the Chebyshev polynomial order K=10, the number of time points T=4, the number of scales M=2, and the number of heads m=4. The other hyper-parameters (α, β, and γ) are tuned for each dataset. |
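The split rule quoted in the Dataset Splits row (20 nodes per class for training, 30 per class for validation, the remainder for test) can be sketched as below. The helper name and the flat label list are assumptions for illustration, not code from the paper.

```python
import random

def per_class_split(labels, n_train=20, n_val=30, seed=0):
    """Split node indices into train/val/test, drawing a fixed number of
    nodes per class for train and validation, as in the quoted setup."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, val, test = [], [], []
    for nodes in by_class.values():
        rng.shuffle(nodes)
        train += nodes[:n_train]
        val += nodes[n_train:n_train + n_val]
        test += nodes[n_train + n_val:]
    return train, val, test

# Toy example: 3 classes with 60 nodes each
labels = [c for c in range(3) for _ in range(60)]
tr, va, te = per_class_split(labels)
print(len(tr), len(va), len(te))  # 60 90 30
```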
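The hyper-parameter settings quoted in the Experiment Setup row can be collected into a single configuration sketch. Since the paper released no code, the structure and all names below are hypothetical; only the numeric values come from the quoted text.

```python
# Hypothetical configuration mirroring the quoted PRI-GSL experiment setup
# for node classification; key names are illustrative assumptions.
PRI_GSL_CONFIG = {
    "encoder": "GCN",        # 2-layer GCN encoder for node classification
    "num_layers": 2,
    "hidden_dim": 32,        # representation dimension d
    "chebyshev_order": 10,   # Chebyshev polynomial order K
    "num_time_points": 4,    # T
    "num_scales": 2,         # M
    "num_heads": 4,          # m, attention heads
    # alpha, beta, gamma are tuned per dataset; values are not reported
    "alpha": None,
    "beta": None,
    "gamma": None,
}

def describe(config: dict) -> str:
    """Render a one-line summary of the configuration."""
    return (f"{config['num_layers']}-layer {config['encoder']}, "
            f"d={config['hidden_dim']}, K={config['chebyshev_order']}, "
            f"T={config['num_time_points']}, M={config['num_scales']}, "
            f"m={config['num_heads']}")

print(describe(PRI_GSL_CONFIG))  # 2-layer GCN, d=32, K=10, T=4, M=2, m=4
```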