Disentangled Contrastive Learning on Graphs

Authors: Haoyang Li, Xin Wang, Ziwei Zhang, Zehuan Yuan, Hang Li, Wenwu Zhu

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our method against several state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | ¹Tsinghua University, ²ByteDance |
| Pseudocode | No | The paper describes the model framework and optimization details in text and diagrams, but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access (e.g., a repository link) to source code for the described methodology. |
| Open Datasets | Yes | To demonstrate the advantages of our method, we conduct experiments on nine well-known graph classification datasets, including four bioinformatics datasets, i.e., MUTAG, PTC-MR, NCI1, PROTEINS, and five social network datasets, i.e., COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-5K. We also adopt a larger graph dataset, ogbg-molhiv, from the Open Graph Benchmark (OGB) [26]. (A loading sketch appears after the table.) |
| Dataset Splits | Yes | We adopt the 10-fold cross-validation accuracy, and report the mean accuracy (%) with standard deviation after five repeated runs. (An evaluation sketch appears after the table.) |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions using GIN [4] as the message-passing layers, but does not list ancillary software with version numbers (e.g., Python, PyTorch, or other library versions). (A GIN layer sketch appears after the table.) |
| Experiment Setup | Yes | For a fair comparison, the hyper-parameters of the graph augmentations are kept consistent with GraphCL. ... Since the ground-truth number of the latent factors is unknown, we search the number of channels K from 1 to 10. (A search-loop sketch appears after the table.) |
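
Although the paper ships no code, all of the datasets above are publicly downloadable. A minimal loading sketch, assuming PyTorch Geometric and the ogb package (the paper names neither library nor any versions):

```python
from torch_geometric.datasets import TUDataset
from ogb.graphproppred import PygGraphPropPredDataset

# The nine TU benchmarks (MUTAG, PTC-MR, NCI1, PROTEINS, COLLAB, ...)
# are available through PyTorch Geometric's TUDataset wrapper.
mutag = TUDataset(root="data/TUDataset", name="MUTAG")
collab = TUDataset(root="data/TUDataset", name="COLLAB")

# ogbg-molhiv comes from the Open Graph Benchmark with a standardized split.
molhiv = PygGraphPropPredDataset(name="ogbg-molhiv", root="data/ogb")
split_idx = molhiv.get_idx_split()  # dict with "train" / "valid" / "test" indices
```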
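The 10-fold protocol itself is not released, so the exact fold construction is unknown. A sketch of one common implementation, assuming scikit-learn and a hypothetical `evaluate_fold` callback that trains the classifier on the train indices and returns test accuracy:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_val_accuracy(labels, evaluate_fold, n_runs=5, n_folds=10):
    """Mean/std of 10-fold CV accuracy over repeated runs.

    `evaluate_fold(train_idx, test_idx, seed)` is a hypothetical callback
    that trains on the train split and returns accuracy on the test split.
    """
    run_means = []
    for seed in range(n_runs):
        # Re-shuffle the folds on each repeated run with a different seed.
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        fold_accs = [
            evaluate_fold(train_idx, test_idx, seed)
            for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels)
        ]
        run_means.append(np.mean(fold_accs))
    return float(np.mean(run_means)), float(np.std(run_means))
```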
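For context on the one dependency the paper does name, here is what a single GIN message-passing layer looks like in PyTorch Geometric (an assumed, unversioned choice of library; the paper only cites GIN [4]):

```python
import torch
from torch_geometric.nn import GINConv

in_dim, hidden_dim = 16, 32
# GINConv wraps an MLP that transforms the aggregated neighbor features.
mlp = torch.nn.Sequential(
    torch.nn.Linear(in_dim, hidden_dim),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_dim, hidden_dim),
)
conv = GINConv(mlp)

x = torch.randn(4, in_dim)                         # 4 nodes, 16-dim features
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # 3 directed edges
out = conv(x, edge_index)                          # [4, hidden_dim] node embeddings
```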
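The channel search described in the setup can be expressed as a simple validation loop. A sketch assuming a hypothetical `train_and_validate(k)` helper that trains the model with K latent-factor channels and returns validation accuracy:

```python
def search_num_channels(train_and_validate, k_range=range(1, 11)):
    """Pick the channel count K with the best validation accuracy.

    `train_and_validate(k)` is a hypothetical callback; the paper searches
    K from 1 to 10 because the true number of latent factors is unknown.
    """
    best_k, best_acc = None, float("-inf")
    for k in k_range:
        acc = train_and_validate(k)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k, best_acc
```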