Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces
Authors: Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the benchmark datasets show the superiority of RieGrace, and additionally, we investigate how curvature changes over the graph sequence. |
| Researcher Affiliation | Academia | Li Sun (1), Junda Ye (2), Hao Peng (3), Feiyang Wang (2), Philip S. Yu (4). (1) School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China; (2) School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China; (3) Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing 100191, China; (4) Department of Computer Science, University of Illinois at Chicago, IL, USA |
| Pseudocode | Yes | Algorithm 1: RieGrace. Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We choose five benchmark datasets, i.e., Cora and Citeseer (Sen et al. 2008), Actor (Tang et al. 2009), ogbn-arXiv (Mikolov et al. 2013) and Reddit (Hamilton, Ying, and Leskovec 2017). The setting of graph sequence (task continuum) on Cora, Citeseer, Actor and ogbn-arXiv follows Zhang, Song, and Tao (2022), and the setting on Reddit follows Zhou and Cao (2021). (A hedged loading sketch for these benchmarks follows the table.) |
| Dataset Splits | Yes | Definition 1 (Graph Sequence). The sequence of tasks in graph continual learning is described as a graph sequence $\mathcal{G} = \{G_1, \dots, G_T\}$, and each graph $G_t$ corresponds to a task $\mathcal{T}_t$. Each task contains a training node set $\mathcal{V}_t^{tr}$ and a testing node set $\mathcal{V}_t^{te}$ with node features $X_t^{tr}$ and $X_t^{te}$. Additionally, the statement 'The grid search is performed for hyperparameters, e.g., learning rate: [0.001, 0.005, 0.008, 0.01]' implies a validation procedure, typically on a validation set held out from the training data, consistent with the referenced benchmark settings. (A minimal sketch of this task structure follows the table.) |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions general software components like 'GCN' (referencing Kipf and Welling (2017)) but does not provide specific version numbers for programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The grid search is performed for hyperparameters, e.g., learning rate: [0.001, 0.005, 0.008, 0.01]. In our model, we stack the convolutional layer twice with a 2-layer CurvNet. Balance weight λ = 1. (A hedged sketch of this tuning protocol follows the table.) |
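
The paper does not name its data tooling, so the sketch below shows one common way to obtain the five benchmarks, via PyTorch Geometric and OGB; the cache directory `data` is an assumption, not something the paper specifies.

```python
# Hypothetical loading sketch; the paper does not specify its data tooling.
# PyTorch Geometric and OGB are common sources for these five benchmarks.
from torch_geometric.datasets import Planetoid, Actor, Reddit
from ogb.nodeproppred import PygNodePropPredDataset

root = "data"  # local cache directory (assumption)

cora     = Planetoid(root=root, name="Cora")[0]
citeseer = Planetoid(root=root, name="CiteSeer")[0]
actor    = Actor(root=f"{root}/Actor")[0]
arxiv    = PygNodePropPredDataset(name="ogbn-arxiv", root=root)[0]
reddit   = Reddit(root=f"{root}/Reddit")[0]

print(cora)  # e.g., Data(x=[2708, 1433], edge_index=[2, 10556], y=[2708], ...)
```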
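Definition 1 translates directly into a small data structure. The sketch below is a minimal rendering under assumed tensor conventions (boolean masks over a shared node set); the class and field names are illustrative, not the authors' code.

```python
# Minimal sketch of Definition 1 (graph sequence / task continuum).
# Names and fields here are illustrative, not the authors' implementation.
from dataclasses import dataclass
from typing import List
import torch

@dataclass
class GraphTask:
    """One task T_t: a graph G_t with disjoint train/test node sets."""
    edge_index: torch.Tensor  # [2, num_edges] connectivity of G_t
    x: torch.Tensor           # [num_nodes, d] node features (X_t^tr and X_t^te)
    y: torch.Tensor           # [num_nodes] node labels
    train_mask: torch.Tensor  # boolean mask selecting V_t^tr
    test_mask: torch.Tensor   # boolean mask selecting V_t^te

# A graph sequence G = {G_1, ..., G_T}: the model is trained on each
# task in order and evaluated on all tasks seen so far.
GraphSequence = List[GraphTask]
```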
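The reported setup (learning-rate grid, two stacked convolutional layers, a 2-layer CurvNet, λ = 1) suggests a straightforward tuning loop. In the sketch below, `build_model`, `train`, and `evaluate` are hypothetical placeholders; since the paper releases no training script, this is a sketch of the protocol, not the authors' implementation.

```python
# Hedged sketch of the reported tuning protocol. `build_model`, `train`,
# and `evaluate` are hypothetical placeholders, not the authors' code.
LEARNING_RATES = [0.001, 0.005, 0.008, 0.01]  # grid reported in the paper

best_lr, best_score = None, float("-inf")
for lr in LEARNING_RATES:
    # Two stacked convolutional layers with a 2-layer CurvNet; balance
    # weight lambda = 1, as stated in the experiment setup.
    model = build_model(conv_layers=2, curvnet_layers=2, lam=1.0)
    train(model, lr=lr)      # hypothetical: train over the task sequence
    score = evaluate(model)  # hypothetical: e.g., mean accuracy over seen tasks
    if score > best_score:
        best_lr, best_score = lr, score

print(f"selected learning rate: {best_lr} (validation score {best_score:.4f})")
```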