S3GCL: Spectral, Swift, Spatial Graph Contrastive Learning
Authors: Guancheng Wan, Yijun Tian, Wenke Huang, Nitesh V Chawla, Mang Ye
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we comprehensively evaluate our proposed S3GCL by answering the following main questions. Q1: Superiority. Does S3GCL outperform existing state-of-the-art graph contrastive learning methods? Q2: Efficiency. How efficient is the proposed method at inference time? Q3: Effectiveness. Are the proposed cosine-parameterized Chebyshev polynomial, MLP encoder, and spatial positive pairs effective? Q4: Sensitivity. How does the proposed method perform under different hyper-parameters? The answers to Q1-Q3 are given in Sections 4.2-4.4, and sensitivity analyses (Q4) can be found in Appendix G. The code is available at https://github.com/GuanchengWan/S3GCL. |
| Researcher Affiliation | Academia | (1) National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China; (2) Department of Computer Science, University of Notre Dame, USA; (3) Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, China. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/GuanchengWan/S3GCL. |
| Open Datasets | Yes | To effectively evaluate our approach in practical scenarios, we employed 14 benchmark graph datasets of various sizes and features, including both homophilic and heterophilic graphs. Please see Appendix A for details about datasets. From Appendix A: Cora, CiteSeer, and PubMed. These datasets are recognized as classic examples of homophilic citation networks. ... (Sen et al., 2008; Shchur et al., 2018; Hu et al., 2020). |
| Dataset Splits | Yes | We utilize a commonly used split of 60%/20%/20% for train/validation/test sets. (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | The experiments are conducted using NVIDIA GeForce RTX 3090 GPUs as the hardware platform, coupled with Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz. |
| Software Dependencies | Yes | The deep learning framework employed was PyTorch, version 1.11.0, alongside CUDA version 11.3. (A quick version check appears below the table.) |
| Experiment Setup | Yes | The hidden layer size was set to 1024 for each dataset. For optimization, Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) was chosen, featuring a momentum of 0.9 and a weight decay of 1e-5. The learning rate was configured to 5e-4 during the training process and 1e-2 for the linear evaluation phase. (Translated into a configuration sketch below the table.) |
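
For concreteness, here is a minimal sketch of the 60%/20%/20% node split reported in the Dataset Splits row. Drawing the split as a uniform random permutation with a fixed seed is an assumption; the quoted text only states the ratios.

```python
import torch

def split_nodes(num_nodes: int, seed: int = 0):
    """60%/20%/20% train/val/test split over node indices.

    A uniform random permutation with a fixed seed is an assumption;
    the paper only reports the split ratios.
    """
    perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(seed))
    n_train, n_val = int(0.6 * num_nodes), int(0.2 * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

train_idx, val_idx, test_idx = split_nodes(2708)  # e.g. Cora has 2708 nodes
```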
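The reported software stack can be verified with a quick check; the expected values are taken from the Software Dependencies row above.

```python
import torch

# Expected per the paper: PyTorch 1.11.0 built against CUDA 11.3.
print(torch.__version__)         # e.g. '1.11.0'
print(torch.version.cuda)        # e.g. '11.3'
print(torch.cuda.is_available()) # True on the reported RTX 3090 setup
```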
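The Experiment Setup row translates to the following PyTorch configuration. This is a minimal sketch: the two-layer MLP encoder layout, the input/output dimensions (illustrative Cora values), and carrying momentum and weight decay over to the linear probe are all assumptions beyond the quoted text.

```python
import torch
from torch import nn

IN_DIM, HIDDEN, NUM_CLASSES = 1433, 1024, 7  # 1433/7 are Cora's dims (illustrative); hidden size 1024 per the paper

# Hypothetical MLP encoder; the paper uses an MLP encoder with hidden size
# 1024, but this exact two-layer layout is an assumption.
encoder = nn.Sequential(
    nn.Linear(IN_DIM, HIDDEN),
    nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN),
)

# SGD as reported: momentum 0.9, weight decay 1e-5, lr 5e-4 for training.
train_opt = torch.optim.SGD(encoder.parameters(), lr=5e-4,
                            momentum=0.9, weight_decay=1e-5)

# Linear evaluation phase at lr 1e-2; reusing momentum and weight decay
# for the probe is an assumption.
probe = nn.Linear(HIDDEN, NUM_CLASSES)
eval_opt = torch.optim.SGD(probe.parameters(), lr=1e-2,
                           momentum=0.9, weight_decay=1e-5)
```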