Overcoming Catastrophic Forgetting in Graph Neural Networks

Authors: Huihui Liu, Yiding Yang, Xinchao Wang

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate TWP on different GNN backbones over several datasets, and demonstrate that it yields performances superior to the state of the art.
Researcher Affiliation | Academia | Stevens Institute of Technology; {hliu79, yyang99, xinchao.wang}@stevens.edu
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code is publicly available at https://github.com/hhliu79/TWP.
Open Datasets | Yes | Node Classification. For transductive learning, we utilize two widely used datasets named Corafull (Bojchevski and Günnemann 2017) and Amazon Computers (McAuley et al. 2015). ... For inductive learning, we use two datasets: a protein-protein interaction (PPI) dataset (Zitnik and Leskovec 2017) and a large graph dataset of Reddit posts (Hamilton, Ying, and Leskovec 2017). ... Graph Classification. We conduct experiments on a graph classification dataset, Tox21 (Huang et al. 2014). (A loading sketch follows the table.)
Dataset Splits | No | The paper mentions a training node set V^tr_k and a testing node set V^te_k but does not explicitly detail validation splits or specific percentages for any dataset split, deferring further information to the supplementary material.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam SGD optimizer but does not specify any programming languages, libraries, or other software dependencies with version numbers.
Experiment Setup | Yes | Adam SGD optimizer is used and the initial learning rate is set to 0.005 for all the datasets. The training epochs are set to 200, 200, 400, 30, and 100 for Corafull, Amazon Computers, PPI, Reddit, and Tox21, respectively. Early stopping is adopted for PPI and Tox21, where the patience is 10 for both. The regularizer hyper-parameter for EWC and MAS is always set to 10,000. The episodic memory for GEM contains all training nodes for the Corafull and PPI datasets, and 100, 1,000, and 100 training nodes for the Amazon Computers, Reddit, and Tox21 datasets, respectively. For our method, λl is always set to 10,000, λt is selected from 100 and 10,000 for different datasets, and β is selected from 0.1 and 0.01. (A configuration sketch follows the table.)
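The node-classification datasets listed under Open Datasets are all distributed with DGL's built-in loaders. The sketch below shows one way they could be loaded; this is an assumption for illustration, since the paper does not describe its data-loading pipeline or how classes are partitioned into sequential tasks.

```python
# Minimal loading sketch for the node-classification datasets, assuming
# DGL's built-in loaders (the paper does not specify its loading pipeline).
import dgl.data

# Transductive datasets
corafull = dgl.data.CoraFullDataset()              # Corafull
amazon = dgl.data.AmazonCoBuyComputerDataset()     # Amazon Computers

g = corafull[0]
features = g.ndata['feat']   # node feature matrix
labels = g.ndata['label']    # node labels; classes would be grouped into tasks

# Inductive datasets
ppi_train = dgl.data.PPIDataset(mode='train')      # multiple training graphs
reddit = dgl.data.RedditDataset()                  # one large post graph

# Tox21 (graph classification) ships with the separate DGL-LifeSci package
# (dgllife.data.Tox21) and is not loaded here.
print(corafull.num_classes, amazon.num_classes)
```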
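To make the Experiment Setup row concrete, here is a minimal configuration sketch in PyTorch. Only the numeric values are taken from the paper; the dictionary names and the helper function are hypothetical placeholders, and the TWP regularization terms themselves are not implemented here.

```python
# Sketch of the reported training configuration (PyTorch). Only the numeric
# values are from the paper; the names below are illustrative placeholders.
import torch

LEARNING_RATE = 0.005                    # Adam, same initial rate for all datasets

EPOCHS = {                               # training epochs per dataset
    'Corafull': 200,
    'Amazon Computers': 200,
    'PPI': 400,
    'Reddit': 30,
    'Tox21': 100,
}

EARLY_STOP_PATIENCE = {'PPI': 10, 'Tox21': 10}   # early stopping only for these two

BASELINE_REG = 10_000                    # regularizer weight for EWC and MAS

GEM_MEMORY = {                           # episodic memory size for GEM
    'Corafull': 'all training nodes',
    'PPI': 'all training nodes',
    'Amazon Computers': 100,
    'Reddit': 1_000,
    'Tox21': 100,
}

TWP_HPARAMS = {
    'lambda_l': 10_000,                  # always 10,000
    'lambda_t': (100, 10_000),           # selected per dataset
    'beta': (0.1, 0.01),                 # selected per dataset
}

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    """Adam optimizer with the initial learning rate reported in the paper."""
    return torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```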