Towards Effective and General Graph Unlearning via Mutual Evolution

Authors: Xunkai Li, Yulin Zhao, Zhengyu Wu, Wentao Zhang, Rong-Hua Li, Guoren Wang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on 9 graph benchmark datasets demonstrate the superior performance of MEGU in addressing unlearning requirements at the feature, node, and edge levels. Specifically, MEGU achieves average performance improvements of 2.7%, 2.5%, and 3.2% across these three levels of unlearning tasks when compared to state-of-the-art baselines.
Researcher Affiliation | Academia | 1. Beijing Institute of Technology, Beijing, China; 2. Shenzhen Institute of Technology, Shenzhen, China; 3. Shandong University, Shandong, China; 4. Peking University, Beijing, China; 5. National Engineering Laboratory for Big Data Analytics and Applications, Beijing, China
Pseudocode | Yes | Algorithm 1: Adaptive HIN Selection
Open Source Code | Yes | For a comprehensive overview of datasets and baselines, please refer to (Li et al. 2023), Appendix A.1. ... Li, X.; Zhao, Y.; Wu, Z.; Zhang, W.; Li, R.-H.; and Wang, G. 2023. MEGU Technical Report. https://github.com/xkLi-Allen/MEGU.
Open Datasets | Yes | Datasets. We split all datasets following the guidelines of recent GU approaches (Cheng et al. 2023; Wu et al. 2023), which randomly split nodes into 80% for training and 20% for testing. For a comprehensive overview of datasets and baselines, please refer to (Li et al. 2023), Appendix A.1. (A minimal sketch of this 80/20 split appears after the table.)
Dataset Splits | No | The paper mentions splitting nodes into 80% for training and 20% for testing, but does not explicitly state a validation split percentage or details.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using 'GCN as the backbone' but does not specify version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper states that 'we customize the training epochs for each GU strategy to their respective optimal values' and mentions '30-50 epochs' for MEGU, but does not provide specific values for hyperparameters such as learning rate, batch size, or optimizer settings.
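
The Open Datasets row quotes an 80%/20% random node split, and the Software Dependencies row notes that GCN is used as the backbone. The sketch below illustrates only that split protocol and a generic two-layer GCN; it is not the authors' released code, and the dataset (Cora), hidden size, learning rate, and epoch count are illustrative placeholders rather than values reported in the paper.

```python
# Illustrative sketch of the 80/20 random node split and a GCN backbone.
# Assumptions: PyTorch Geometric is installed; Cora stands in for the paper's
# nine benchmarks; all hyperparameters below are placeholders, not MEGU's.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

# Randomly assign 80% of nodes to training and the remaining 20% to testing.
perm = torch.randperm(data.num_nodes)
split = int(0.8 * data.num_nodes)
data.train_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
data.test_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
data.train_mask[perm[:split]] = True
data.test_mask[perm[split:]] = True

class GCN(torch.nn.Module):
    """Two-layer GCN backbone of the kind commonly used by GU baselines."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 64, dataset.num_classes)  # hidden size 64 is a placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)   # lr is a placeholder

for epoch in range(50):  # the paper only reports "30-50 epochs" for MEGU
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```

A model trained under such a split would then serve as the target for the feature-, node-, and edge-level unlearning requests evaluated in the paper; reproducing those results requires the released MEGU code linked above.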