DyGRAIN: An Incremental Learning Framework for Dynamic Graphs
Authors: Seoyoon Kim, Seongjun Yun, Jaewoo Kang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on large-scale graph datasets demonstrate that our proposed method improves the performance by effectively capturing pivotal nodes and preventing catastrophic forgetting. |
| Researcher Affiliation | Collaboration | ¹LG AI Research; ²Department of Computer Science and Engineering, Korea University. seoyoon.kim@lgresearch.ai, {ysj5419, kangj}@korea.ac.kr |
| Pseudocode | Yes | Algorithm 1: Training Procedure of DyGRAIN |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We use Open Graph Benchmark (OGB) Arxiv, Products [Hu et al., 2020], Reddit [Hamilton et al., 2017] and PubMed [Sen et al., 2008]. |
| Dataset Splits | No | To simulate the dynamic scenario, the dataset is split into several incremental blocks, but the paper does not give exact percentages or absolute sample counts for the training, validation, and test splits, beyond implying that a test set exists (a sketch of such an incremental split appears after this table). |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models, memory, or specific computing resources used for the experiments. |
| Software Dependencies | No | The paper mentions that 'Implementation details are described in Appendix B', but Appendix B is not provided in the main text, and no specific software names with version numbers are listed. |
| Experiment Setup | No | The paper mentions that 'Implementation details are described in Appendix B', but Appendix B is not provided in the main text, and no specific hyperparameter values or detailed training configurations are given in the main body of the paper. |
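Because the paper defers its split protocol to an unavailable Appendix B, the following is only a minimal sketch of how the "incremental blocks" setup referenced in the Dataset Splits row could be simulated on ogbn-arxiv. The block count and the publication-year ordering are assumptions for illustration, not the authors' documented protocol.

```python
# Hypothetical sketch: simulating an incremental-block split on ogbn-arxiv.
# The number of blocks and the time-based ordering are assumptions; the
# paper's actual protocol is described only in its (unprovided) Appendix B.
import numpy as np
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv")
graph, labels = dataset[0]  # graph is a dict of arrays, labels is (N, 1)

# ogbn-arxiv ships a per-node publication year; ordering nodes by time lets
# each block add newer nodes (and their induced edges) to the growing graph.
years = graph["node_year"].squeeze()   # shape: (num_nodes,)
order = np.argsort(years)

num_blocks = 5                         # assumed block count
blocks = np.array_split(order, num_blocks)

# At step t, the visible graph is the union of blocks 0..t; an incremental
# learner would be trained on each newly arrived block in turn.
for t, block in enumerate(blocks):
    visible = np.concatenate(blocks[: t + 1])
    print(f"block {t}: +{len(block)} new nodes, {len(visible)} visible total")
```

Reconstructing the per-block training, validation, and test partitions would still require the unspecified percentages, which is why the Dataset Splits variable is marked "No".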