Adversarial Graph Augmentation to Improve Graph Contrastive Learning

Authors: Susheel Suresh, Pan Li, Cong Hao, Jennifer Neville

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally validate AD-GCL by comparing with the state-of-the-art GCL methods and achieve performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings overall with 18 different benchmark datasets for the tasks of molecule property regression and classification, and social network classification.
Researcher Affiliation | Collaboration | Susheel Suresh (Purdue University, suresh43@purdue.edu); Pan Li (Purdue University, panli@purdue.edu); Cong Hao (Georgia Tech, callie.hao@gatech.edu); Jennifer Neville (Purdue University and Microsoft Research, jenneville@microsoft.com)
Pseudocode | Yes | See Appendix D for a summary of AD-GCL in its algorithmic form.
Open Source Code | Yes | https://github.com/susheels/adgcl
Open Datasets | Yes | We use datasets from Open Graph Benchmark (OGB) [52], TU Dataset [73] and ZINC [74] for graph-level property classification and regression.
Dataset Splits | Yes | AD-GCL-OPT assumes the augmentation search space has some weak information from the downstream task. A full range of analysis on how λreg impacts AD-GCL will be investigated in Sec. 5.2. We compare AD-GCL with three unsupervised/self-supervised learning baselines for graph-level tasks, which include randomly initialized untrained GIN (RU-GIN) [72], InfoGraph [18] and GraphCL [24].
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using GIN as the encoder and implies standard machine learning frameworks but does not list specific software dependencies with version numbers.
Experiment Setup | Yes | We consider two types of AD-GCL: one with a fixed regularization weight λreg = 5 (Eq. 8), termed AD-GCL-FIX, and another with λreg tuned over the validation set among {0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0}, termed AD-GCL-OPT.
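The AD-GCL-OPT setup above amounts to a one-dimensional grid search over λreg. A minimal sketch of that selection loop, where `evaluate_on_validation` is a hypothetical stand-in for the paper's actual pipeline (training AD-GCL with a given λreg and scoring the resulting representations on the validation split):

```python
# Grid of candidate regularization weights reported in the paper.
LAMBDA_GRID = [0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0]

# AD-GCL-FIX simply uses this single fixed value.
FIXED_LAMBDA = 5.0

def select_lambda_reg(evaluate_on_validation):
    """AD-GCL-OPT-style selection: return the lambda_reg whose run
    achieves the best validation score (higher is better).

    `evaluate_on_validation` is a hypothetical callable, not from the
    paper's code; it should map a lambda_reg value to a scalar score.
    """
    scores = {lam: evaluate_on_validation(lam) for lam in LAMBDA_GRID}
    return max(scores, key=scores.get)
```

For a validation metric where lower is better (e.g., regression error), negate the score before passing it in, or swap `max` for `min`.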