Decoupled Self-supervised Learning for Graphs

Authors: Teng Xiao, Zhengyu Chen, Zhimeng Guo, Zeyang Zhuang, Suhang Wang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on various types of graph benchmarks demonstrate that our proposed framework can achieve better performance compared with competitive baselines."
Researcher Affiliation | Academia | The Pennsylvania State University, Zhejiang University, Tongji University
Pseudocode | Yes | "Our full algorithm and network are provided in Appendix B." (Algorithm 1: the Decoupled Self-supervised Learning (DSSL) algorithm)
Open Source Code | No | The paper's checklist answers 'Yes' to including code, data, and instructions in supplemental material or via a URL, but no explicit statement or link confirming that the code for the described method is openly available was found in the main paper or its appendices.
Open Datasets | Yes | "We perform experiments on widely-used homophilic graph datasets: Cora, Citeseer, and Pubmed [42], as well as non-homophilic datasets: Texas, Cornell, Wisconsin [37], Penn94 and Twitch."
Dataset Splits | Yes | "For datasets, we adopt the similar random split with a train/validation/test split ratio of 48/32/20% for the training of downstream linear classifier following [37]." (A hedged sketch of this split appears after the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU/CPU models, memory, or other machine details).
Software Dependencies | No | The paper mentions components such as GCN, the Adam optimizer, the Gumbel-Softmax estimator, and K-means, but provides no version numbers for any software dependencies or libraries.
Experiment Setup | Yes | "We select the best configuration of hyper-parameters based on accuracy on the validation. The detailed settings are given in Appendix E.2." (Validation-based model selection is illustrated in the sketch below.)
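
To make the Dataset Splits and Experiment Setup rows concrete, below is a minimal sketch of a 48/32/20% random node split followed by validation-accuracy selection of a downstream linear classifier. The helper names (`random_node_split`, `linear_probe_accuracy`), the scikit-learn `LogisticRegression` probe, and the hyper-parameter grid are assumptions for illustration only; the paper's own implementation is not available and may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def random_node_split(num_nodes, train_ratio=0.48, val_ratio=0.32, seed=0):
    """Hypothetical 48/32/20 random split of node indices.

    Mirrors the split ratio quoted in the Dataset Splits row; the
    authors' exact splitting code is not published.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]


def linear_probe_accuracy(embeddings, labels, seed=0):
    """Fit a linear classifier on frozen embeddings, select the
    regularization strength by validation accuracy, and report test accuracy."""
    train_idx, val_idx, test_idx = random_node_split(len(labels), seed=seed)
    best_val, best_clf = -1.0, None
    for c in [0.01, 0.1, 1.0, 10.0]:  # illustrative hyper-parameter grid, not the paper's
        clf = LogisticRegression(C=c, max_iter=1000)
        clf.fit(embeddings[train_idx], labels[train_idx])
        val_acc = clf.score(embeddings[val_idx], labels[val_idx])
        if val_acc > best_val:
            best_val, best_clf = val_acc, clf
    return best_clf.score(embeddings[test_idx], labels[test_idx])
```

For example, `linear_probe_accuracy(Z, y)` would evaluate frozen self-supervised embeddings `Z` (a NumPy array of node representations) against node labels `y`; the resulting number depends entirely on the embeddings and grid supplied, and is not a reproduction of the paper's reported results.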