Bootstrapping Informative Graph Augmentation via A Meta Learning Approach

Authors: Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Fuchun Sun, Changwen Zheng

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, experiments across multiple benchmark datasets demonstrate that MEGA outperforms state-of-the-art methods in graph self-supervised learning tasks; further experimental studies confirm the effectiveness of the individual terms of MEGA.
Researcher Affiliation | Academia | (1) University of Chinese Academy of Sciences, (2) Institute of Software, Chinese Academy of Sciences, (3) Tsinghua University
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Our codes are available at https://github.com/hang53/MEGA."
Open Datasets | Yes | "Datasets. We evaluate our method on twelve benchmark datasets in two major categories: 1) Social Networks: RDT-M5K, IMDB-B, IMDB-M from TU Dataset [Morris et al., 2020]. 2) Molecules: PROTEINS, MUTAG, COLLAB and DD from TU Dataset [Morris et al., 2020] and molesol, mollipo, molbbbp, moltox21 and molsider from Open Graph Benchmark (OGB) [Hu et al., 2020a]." (A dataset- and split-loading sketch follows the table.)
Dataset Splits | Yes | "We followed the experimental protocol of AD-GCL, including the train/validation/test splits."
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "We adopt the Adam optimizer with a learning rate of 10^-4 for learnable graph augmentation and a learning rate of 10^-3 for graph encoding. We use 50 training epochs on all datasets." (A training-setup sketch follows the table.)
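
Below is a minimal dataset- and split-loading sketch for the benchmarks listed above, assuming PyTorch Geometric and the ogb package are installed; the dataset names, root paths, and variable names are illustrative and not taken from the paper's code.

    from torch_geometric.datasets import TUDataset
    from ogb.graphproppred import PygGraphPropPredDataset

    # TU Dataset benchmarks (e.g., MUTAG, PROTEINS, IMDB-B) ship without
    # predefined splits; protocols such as AD-GCL's create them at run time.
    tu_dataset = TUDataset(root="data/TUDataset", name="MUTAG")

    # The OGB molecular benchmarks (molesol, mollipo, molbbbp, moltox21,
    # molsider) come with standardized splits exposed by the library.
    ogb_dataset = PygGraphPropPredDataset(name="ogbg-molesol", root="data/OGB")
    split_idx = ogb_dataset.get_idx_split()  # keys: 'train', 'valid', 'test'
    train_set = ogb_dataset[split_idx["train"]]
    valid_set = ogb_dataset[split_idx["valid"]]
    test_set = ogb_dataset[split_idx["test"]]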
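
And a minimal sketch of the reported optimization setup, assuming PyTorch; augmenter and encoder here are hypothetical placeholders standing in for MEGA's learnable-augmentation network and graph encoder, whose actual architectures are defined in the released code.

    import torch
    from torch import nn

    # Placeholders only: the actual networks in MEGA are GNN-based modules.
    augmenter = nn.Linear(16, 16)  # stands in for the learnable augmentation net
    encoder = nn.Linear(16, 16)    # stands in for the graph encoder

    # Separate Adam optimizers with the learning rates reported in the paper.
    aug_optimizer = torch.optim.Adam(augmenter.parameters(), lr=1e-4)
    enc_optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for epoch in range(50):  # "50 training epochs on all datasets"
        # Compute the training objective here, then update each network, e.g.:
        # aug_optimizer.zero_grad(); loss_aug.backward(); aug_optimizer.step()
        # enc_optimizer.zero_grad(); loss_enc.backward(); enc_optimizer.step()
        pass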