Towards Robust Graph Incremental Learning on Evolving Graphs

Authors: Junwei Su, Difan Zou, Zijun Zhang, Chuan Wu

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through comprehensive empirical studies with several benchmark datasets, we demonstrate that our proposed method, Structural-Shift-Risk Mitigation (SSRM), is flexible and easy to adapt to improve the performance of state-of-the-art GNN incremental learning frameworks in the inductive setting.
Researcher Affiliation | Academia | 1 Department of Computer Science, University of Hong Kong; 2 Department of Computer Science, Wuhan University.
Pseudocode | Yes | The overall learning procedure in each stage is summarized in Algorithm 1, and Fig. 6 provides a graphical illustration of the procedure.
Open Source Code | Yes | Implementation available at: https://github.com/littleTown93/NGIL_Evolve
Open Datasets | Yes | We evaluate our proposed method, SSRM, on OGB-Arxiv (Hu et al., 2020), Reddit (Hamilton et al., 2017), and CoraFull (Bojchevski & Günnemann, 2017). (See the loading sketch below the table.)
Dataset Splits | Yes | For all the datasets, the train-validation-test splitting ratios are 60%, 20%, and 20%. (See the split sketch below the table.)
Hardware Specification | Yes | All experiments in this paper were conducted on the following machine. CPU: two Intel Xeon Gold 6230 (2.1 GHz, 20C/40T, 10.4 GT/s, 27.5 MB cache, Turbo, HT, 125 W), DDR4-2933; GPU: four NVIDIA Tesla V100 SXM2 32 GB accelerators with NVLink; Memory: 256 GB (8 × 32 GB) RDIMM, 3200 MT/s, dual rank; OS: Ubuntu 18.04 LTS.
Software Dependencies | No | The paper mentions 'OS: Ubuntu 18.04 LTS' in the hardware specification, but does not list specific version numbers for software libraries or dependencies like Python, PyTorch, or other relevant packages used for the experiments.
Experiment Setup | Yes | We use α = 0.1, β = 0.5 for SSRM. Table 4 is the hyperparameter search space we adopt from (Zhang et al., 2022). GEM: memory strength [0.05, 0.5, 5], n_memories [10, 100, 1000]; TWP: lambda_l [100, 10000], lambda_t [100, 10000], beta [0.01, 0.1]; ER-GNN: budget [10, 100], d [0.05, 0.5, 5.0], sampler [CM]. (See the grid sketch below the table.)
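
For context, the three benchmarks named in the Open Datasets row are all available through standard graph-learning libraries. Below is a minimal loading sketch, assuming the `dgl` and `ogb` packages; the authors' released code at the repository above may use a different data pipeline.

```python
# Minimal sketch: loading the three benchmarks with DGL and OGB.
import dgl
from ogb.nodeproppred import DglNodePropPredDataset

# OGB-Arxiv citation graph (Hu et al., 2020); indexing returns (graph, node labels).
arxiv_graph, arxiv_labels = DglNodePropPredDataset(name="ogbn-arxiv")[0]

# Reddit post-to-post graph (Hamilton et al., 2017).
reddit_graph = dgl.data.RedditDataset()[0]

# CoraFull citation network (Bojchevski & Günnemann, 2017).
corafull_graph = dgl.data.CoraFullDataset()[0]
```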
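The 60%/20%/20% ratios in the Dataset Splits row describe a simple random partition of node indices. The `split_nodes` helper below is an illustrative sketch, not the paper's splitting code:

```python
import torch

def split_nodes(num_nodes: int, seed: int = 0):
    """Shuffle node indices and cut them into 60% train / 20% val / 20% test."""
    perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(seed))
    n_train, n_val = int(0.6 * num_nodes), int(0.2 * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# Example: split the CoraFull nodes loaded in the previous sketch.
train_idx, val_idx, test_idx = split_nodes(corafull_graph.num_nodes())
```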
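The Table 4 search space in the Experiment Setup row can be enumerated as a plain grid. The dictionary below only transcribes the quoted values; the layout and the `grid` helper are ours, not the authors' tuning code:

```python
from itertools import product

# Hyperparameter search space transcribed from Table 4 (Zhang et al., 2022).
SEARCH_SPACE = {
    "GEM":    {"memory_strength": [0.05, 0.5, 5], "n_memories": [10, 100, 1000]},
    "TWP":    {"lambda_l": [100, 10000], "lambda_t": [100, 10000], "beta": [0.01, 0.1]},
    "ER-GNN": {"budget": [10, 100], "d": [0.05, 0.5, 5.0], "sampler": ["CM"]},
}

def grid(space):
    """Yield every hyperparameter combination in one method's search space."""
    keys, values = zip(*space.items())
    for combo in product(*values):
        yield dict(zip(keys, combo))

for cfg in grid(SEARCH_SPACE["GEM"]):
    print(cfg)  # e.g. {'memory_strength': 0.05, 'n_memories': 10}
```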