Certifiably Robust Graph Contrastive Learning

Authors: Minhua Lin, Teng Xiao, Enyan Dai, Xiang Zhang, Suhang Wang

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on real-world datasets demonstrate the effectiveness of our proposed method in providing effective certifiable robustness and enhancing the robustness of any GCL model. |
| Researcher Affiliation | Academia | Minhua Lin, Teng Xiao, Enyan Dai, Xiang Zhang, Suhang Wang, The Pennsylvania State University, {mfl5681,tengxiao,emd5759,xzhang,szw494}@psu.edu |
| Pseudocode | Yes | Algorithm 1: The Training Algorithm of RES. |
| Open Source Code | Yes | The source code of RES is available at https://github.com/ventr1c/RES-GCL. |
| Open Datasets | Yes | We conduct experiments on 4 public benchmark datasets for node classification, i.e., Cora, Pubmed [47], Coauthor-Physics [48], and OGB-arxiv [49], and 3 widely used datasets for graph classification, i.e., MUTAG, PROTEINS [50], and OGB-molhiv [49]. |
| Dataset Splits | Yes | We use public splits for Cora and Pubmed; for the five other datasets, we perform a 10/10/80 random split for training, validation, and testing, respectively. |
| Hardware Specification | Yes | All models are trained on an A6000 GPU with 48 GB memory. |
| Software Dependencies | No | The paper mentions the PyGCL library [65] and GCN but does not specify version numbers for these software dependencies or other key libraries. |
| Experiment Setup | No | The paper states that "a 2-layer GCN is employed as the backbone GNN encoder" and that "all hyperparameters of the baselines are tuned based on the validation set," but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) in the main text. |
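The 10/10/80 random split reported under "Dataset Splits" can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, node count, and fixed seed are assumptions for the example.

```python
import random

def random_split(num_nodes, train_frac=0.1, val_frac=0.1, seed=0):
    """Randomly partition node indices into train/val/test sets (10/10/80)."""
    rng = random.Random(seed)  # fixed seed: an assumption, for reproducibility
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]  # remaining 80% of nodes
    return train, val, test

train, val, test = random_split(1000)
print(len(train), len(val), len(test))  # 100 100 800
```

Note that for Cora and Pubmed the paper uses the standard public splits instead, so a routine like this would apply only to the five other datasets.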