Co-Modality Graph Contrastive Learning for Imbalanced Node Classification

Authors: Yiyue Qian, Chunhui Zhang, Yiming Zhang, Qianlong Wen, Yanfang Ye, Chuxu Zhang

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate that our model significantly outperforms state-of-the-art baseline models and learns more balanced representations on real-world graphs.
Researcher Affiliation Academia 1Department of Computer Science and Engineering, University of Notre Dame, USA 2Department of Computer Science, Brandeis University, USA 3Department of Computer and Data Sciences, Case Western Reserve University, USA
Pseudocode Yes Pseudo-code of CM-GCL is provided in Section A of the Appendix.
Open Source Code Yes Our source code is available at https://github.com/graphprojects/CM-GCL.
Open Datasets Yes In this paper, we adopt four multi-modality graph datasets from existing works, i.e., AMiner [36], Yelp Chi [33], GitHub [29], and Instagram [30], which contain the raw content (e.g., text or image) and the graph structure information.
Dataset Splits Yes We use 70% samples for training, 10% for validation, and the remaining 20% for testing.
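The 70/10/20 split quoted above can be sketched as follows. This is not code from the paper's repository; it is a minimal illustration of a random index split, with the function name `split_indices` and the seed being assumptions.

```python
import numpy as np

def split_indices(n, train=0.7, val=0.1, seed=0):
    # 70% train / 10% validation / 20% test, as described in the paper.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(1000)
print(len(tr), len(va), len(te))  # 700 100 200
```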
Hardware Specification Yes All experiments are conducted under the environment of the Ubuntu 16.04 OS, plus Intel i9-9900k CPU, two GeForce GTX 2080 Ti graphics cards, and 64 GB of RAM.
Software Dependencies No The paper mentions 'Ubuntu 16.04 OS' but does not specify specific software libraries or solvers with version numbers (e.g., PyTorch, TensorFlow, scikit-learn versions).
Experiment Setup Yes With grid search, the pruning ratio e is set to 20%; the number of contrastive pairs R per node in intra-modality GCL differs across graphs (e.g., 5 for the AMiner graph), and the mini-batch size n differs across tasks (e.g., 100 for the AMiner graph). Besides, the temperature parameters τ_inter and τ_intra are both set to 0.1, and the trade-off hyper-parameter λ among co-modality GCL is set to 0.5. For fine-tuning, α and γ in L_focal differ across graphs (e.g., (0.75, 1.0) for the AMiner graph).
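The (α, γ) pair quoted for fine-tuning parameterizes a focal loss, the standard choice for imbalanced classification. The sketch below is not the paper's implementation; it is a minimal binary focal loss in NumPy, assuming α weights the positive class and γ down-weights easy examples, with (0.75, 1.0) being the AMiner setting quoted above.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.75, gamma=1.0):
    # probs: predicted probability of the positive class, shape (n,)
    # targets: binary labels in {0, 1}, shape (n,)
    # (alpha, gamma) = (0.75, 1.0) is the AMiner setting; other graphs differ.
    pt = np.where(targets == 1, probs, 1 - probs)        # prob. of true class
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)   # class weighting
    # Modulating factor (1 - pt)^gamma shrinks the loss on easy examples.
    return float(np.mean(-alpha_t * (1 - pt) ** gamma * np.log(pt + 1e-12)))
```

With γ = 0 and α = 0.5 this reduces (up to scale) to ordinary cross-entropy; raising γ focuses training on hard, misclassified nodes of the minority class.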