An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning
Authors: Dong Li, Aijia Zhang, Junqi Gao, Biqing Qi
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We analyze the effectiveness of Mecoin in terms of generalization error and explore the impact of different distillation strategies on model performance through experiments and VC-dimension analysis. Compared with related work, Mecoin shows superior performance in accuracy and forgetting rate. In this section, we will evaluate Mecoin through experiments and address the following research questions: Q1) Does Mecoin have advantages in graph few-shot continual learning scenarios? Q2) How does MeCs improve the representativeness of class prototypes? Q3) What are the advantages of GKIM over other distillation methods? |
| Researcher Affiliation | Academia | Dong Li (2,3), Aijia Zhang (4), Junqi Gao (4), Biqing Qi (1,2); 1) Department of Electronic Engineering, Tsinghua University; 2) Shanghai Artificial Intelligence Laboratory; 3) Institute for Advanced Study in Mathematics, Harbin Institute of Technology; 4) School of Mathematics, Harbin Institute of Technology |
| Pseudocode | No | The paper describes its methods verbally and with equations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Our code is publicly available in the Mecoin GFSCIL repository. |
| Open Datasets | Yes | We assess Mecoin's performance on three real-world graph datasets: CoraFull, CS, and Computers, comprising two citation networks and one product network. Datasets are split into a Base set for GNN pretraining and a Novel set for incremental learning. Tab. 1 provides the statistics and partitions of the datasets. (See the dataset-loading sketch after this table.) |
| Dataset Splits | No | The paper describes training and test sets but does not explicitly mention a separate validation set or split. (See the Base/Novel split sketch after this table.) |
| Hardware Specification | Yes | CPU: 24-vCPU AMD EPYC 7642 48-Core Processor; GPU: NVIDIA RTX A6000 (48 GB) |
| Software Dependencies | No | The paper mentions GNN models such as GCN (the backbone) and GAT, but does not provide specific version numbers for any software libraries or dependencies (e.g., PyTorch, TensorFlow, or a graph deep learning library). |
| Experiment Setup | Yes | Training parameters are set at 2000 epochs and a learning rate of 0.005. Table 7 hyperparameters (all settings use 2000 epochs, learning rate 0.0005, weight decay 0): CoraFull: GCN (hidden dim 128, dropout 0.5, prototype dim 14), GAT (hidden dim 64, dropout 0.5, prototype dim 14); CS: GCN (hidden dim 128, dropout 0.5, prototype dim 14), GAT (hidden dim 16, dropout 0.5, prototype dim 8); Computers: GCN (hidden dim 128, dropout 0.5, prototype dim 14), GAT (hidden dim 16, dropout 0.5, prototype dim 8). (See the pretraining-loop sketch after this table.) |
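
The dataset row above names CoraFull, CS, and Computers. A minimal loading sketch, assuming the three benchmarks map to PyTorch Geometric's built-in `CoraFull`, `Coauthor` (CS), and `Amazon` (Computers) dataset classes; the `root` paths are placeholders:

```python
from torch_geometric.datasets import Amazon, Coauthor, CoraFull

# Each loader downloads and caches the graph under the given root.
cora_full = CoraFull(root="data/CoraFull")
cs = Coauthor(root="data/CS", name="CS")
computers = Amazon(root="data/Computers", name="Computers")

for ds in (cora_full, cs, computers):
    g = ds[0]  # each dataset holds a single graph
    print(type(ds).__name__, g.num_nodes, g.num_edges, ds.num_classes)

data = cora_full[0]  # the CoraFull graph, reused in the sketches below
```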
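For the Base/Novel partition noted in the splits row (Base classes for GNN pretraining, Novel classes for the incremental sessions), a minimal helper sketch, assuming classes are partitioned by sorted class id; the boundary of 40 Base classes is a placeholder, not the paper's Tab. 1 value:

```python
import torch

def split_base_novel(y: torch.Tensor, num_base_classes: int):
    """Split node labels into a Base mask (GNN pretraining) and a
    Novel mask (few-shot incremental sessions). The class ordering
    and boundary are placeholders; Tab. 1 in the paper gives the
    actual per-dataset partitions."""
    classes = torch.unique(y)                 # sorted unique class ids
    base_classes = classes[:num_base_classes]
    base_mask = torch.isin(y, base_classes)   # True for Base-class nodes
    return base_mask, ~base_mask

# e.g. treat the first 40 classes of CoraFull as the Base set (placeholder)
base_mask, novel_mask = split_base_novel(data.y, num_base_classes=40)
```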
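Finally, a hedged sketch of the Base-set pretraining loop implied by Table 7's CoraFull/GCN row (hidden dim 128, 2000 epochs, learning rate 0.0005, weight decay 0, dropout 0.5). It reuses `data` and `base_mask` from the sketches above; the 2-layer depth is an assumption, and Mecoin's memory module itself is not reproduced here:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn.models import GCN

# Backbone per Table 7's CoraFull/GCN row; num_layers=2 is assumed.
model = GCN(in_channels=data.num_features, hidden_channels=128,
            num_layers=2, out_channels=cora_full.num_classes, dropout=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005, weight_decay=0.0)

for epoch in range(2000):
    model.train()
    optimizer.zero_grad()
    logits = model(data.x, data.edge_index)
    # Pretrain on Base-class nodes only; Novel classes arrive later.
    loss = F.cross_entropy(logits[base_mask], data.y[base_mask])
    loss.backward()
    optimizer.step()
```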