Uncovering the Structural Fairness in Graph Contrastive Learning

Authors: Ruijia Wang, Xiao Wang, Chuan Shi, Le Song

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on various benchmarks and evaluation protocols validate the effectiveness of the proposed method. |
| Researcher Affiliation | Collaboration | Ruijia Wang¹, Xiao Wang¹, Chuan Shi¹, Le Song²; ¹Beijing University of Posts and Telecommunications, ²BioMap and MBZUAI; {wangruijia, xiaowang, shichuan}@bupt.edu.cn, songle@biomap.com |
| Pseudocode | No | The paper describes the methods in text and equations but does not include a dedicated pseudocode block or algorithm section. |
| Open Source Code | No | The paper does not contain an explicit statement or link providing access to the source code for the described methodology. |
| Open Datasets | Yes | Specifically, we choose two categories of datasets: 1) citation networks including Cora [17] and Citeseer [17], 2) social networks Photo [23] and Computer [23] from Amazon. |
| Dataset Splits | No | The paper mentions "early stopping based on the training loss" but does not specify an explicit validation dataset split percentage or count. |
| Hardware Specification | No | The paper mentions running experiments but does not specify any particular hardware components such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions various models and frameworks (e.g., GCN, DGI, GraphCL) but does not provide specific version numbers for software dependencies such as programming languages or libraries. |
| Experiment Setup | Yes | We investigate the impact of the threshold ζ used to split tail nodes and head nodes on classification performance. Figure 4(a) shows the test Micro-F1 w.r.t. different ζ on the Cora dataset. We perform sensitivity analysis on the feature drop rate d_fdr and edge drop rate d_edr, which control the generation of graph augmentations. |
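The experiment-setup row above refers to graph augmentations controlled by a feature drop rate and an edge drop rate, a scheme common in graph contrastive learning (e.g., GraphCL-style views). The paper does not release code, so the sketch below is a minimal, hypothetical illustration of such an augmentation step using a dense adjacency matrix; the function name `augment_graph` and its parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def augment_graph(adj, features, edge_drop_rate=0.2, feat_drop_rate=0.2, seed=None):
    """Generate one augmented view of a graph (hypothetical sketch).

    adj:      (N, N) symmetric 0/1 adjacency matrix (no self-loops)
    features: (N, F) node feature matrix
    Edges are dropped independently with probability edge_drop_rate;
    whole feature dimensions are masked with probability feat_drop_rate.
    """
    rng = np.random.default_rng(seed)

    # Edge dropping: decide per undirected edge (upper triangle), then symmetrize
    # so the augmented adjacency stays symmetric.
    upper = np.triu(adj, k=1)
    keep = rng.random(upper.shape) >= edge_drop_rate
    dropped = upper * keep
    aug_adj = dropped + dropped.T

    # Feature masking: zero out entire feature dimensions shared across nodes,
    # analogous to dropping feature columns at rate feat_drop_rate.
    dim_keep = rng.random(features.shape[1]) >= feat_drop_rate
    aug_feat = features * dim_keep

    return aug_adj, aug_feat
```

Two such views would then be encoded and pulled together by a contrastive objective; the drop rates play the role of d_fdr and d_edr in the sensitivity analysis described above.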