What Matters in Graph Class Incremental Learning? An Information Preservation Perspective
Authors: Jialu Li, Yu Wang, Pengfei Zhu, Wanyu Lin, Qinghua Hu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments). Table 1: Performance comparison on Cora Full, Arxiv, and Reddit for GCIL setting. Table 3: Ablation comparisons of graph spatial information preservation. |
| Researcher Affiliation | Academia | (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) Engineering Research Center of City Intelligence and Digital Governance, Ministry of Education of the People's Republic of China, Tianjin, China; (3) Haihe Lab of ITAI, Tianjin, China; (4) Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China |
| Pseudocode | Yes | Algorithm 1 Framework of GSIP |
| Open Source Code | Yes | The code is available through https://github.com/Jillian555/GSIP. |
| Open Datasets | Yes | We utilize five public datasets to evaluate the effectiveness of the proposed method in GCIL, the statistics of datasets are reported in Appendix B.1. ... Cora Full [48], ... Arxiv [49] and Reddit [50], ... Cora [51] and Citeseer [51]. |
| Dataset Splits | Yes | The train-validation-test splitting ratios are 60%, 20%, and 20% for all datasets. |
| Hardware Specification | Yes | Our model is deployed in PyTorch with an NVIDIA RTX 3090 GPU and trained with 200 epochs for every task. |
| Software Dependencies | No | Our model is deployed in PyTorch with an NVIDIA RTX 3090 GPU and trained with 200 epochs for every task. (Only mentions PyTorch without a version number.) |
| Experiment Setup | Yes | Our model is deployed in PyTorch with an NVIDIA RTX 3090 GPU and trained with 200 epochs for every task. We use Adam with weight decay for optimization, and the learning rate is set to 0.005. We use a two-layer GCN with a hidden dimension 256 as the backbone. A hedged code sketch of this setup follows the table. |
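
The reported setup is concrete enough to sketch in code. The block below is a minimal, hypothetical reconstruction of that configuration (two-layer GCN, hidden dimension 256, Adam at learning rate 0.005, 200 epochs per task), assuming `torch_geometric`'s `GCNConv` and a standard `Data` object with `x`, `edge_index`, `y`, and `train_mask`. The paper's quoted text does not state the weight-decay value, so `5e-4` is only a placeholder; the authors' actual implementation is at https://github.com/Jillian555/GSIP.

```python
# Hypothetical sketch of the reported training setup; not the GSIP method itself.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class TwoLayerGCN(torch.nn.Module):
    """Two-layer GCN backbone with hidden dimension 256, as reported."""

    def __init__(self, num_features: int, num_classes: int, hidden: int = 256):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def train_task(model: TwoLayerGCN, data, epochs: int = 200) -> None:
    """Train on one task: Adam with weight decay, lr 0.005, 200 epochs.

    The weight-decay value is an assumption (5e-4); the paper only says
    "Adam with weight decay" without giving a number.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        logits = model(data.x, data.edge_index)
        loss = F.cross_entropy(logits[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
```

In the GCIL setting, `train_task` would be invoked once per incoming task; GSIP's information-preservation components would sit on top of this backbone and are not reproduced here.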