Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Uncovering the Structural Fairness in Graph Contrastive Learning
Authors: Ruijia Wang, Xiao Wang, Chuan Shi, Le Song
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various benchmarks and evaluation protocols validate the effectiveness of the proposed method. |
| Researcher Affiliation | Collaboration | Ruijia Wang¹, Xiao Wang¹, Chuan Shi¹, Le Song² — ¹Beijing University of Posts and Telecommunications, ²BioMap and MBZUAI |
| Pseudocode | No | The paper describes the methods in text and equations but does not include a dedicated pseudocode block or algorithm section. |
| Open Source Code | No | The paper does not contain an explicit statement or link providing access to the source code for the described methodology. |
| Open Datasets | Yes | Specifically, we choose two categories of datasets: 1) citation networks including Cora [17] and Citeseer [17], 2) social networks Photo [23] and Computer [23] from Amazon. |
| Dataset Splits | No | The paper mentions 'early stopping based on the training loss', but does not specify an explicit validation dataset split percentage or count. |
| Hardware Specification | No | The paper mentions running experiments but does not specify any particular hardware components such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions various models and frameworks (e.g., GCN, DGI, GraphCL) but does not provide version numbers for software dependencies such as programming languages or libraries. |
| Experiment Setup | Yes | We investigate the impact of threshold ζ used to split tail nodes and head nodes on classification performance. Figure 4 (a) shows the test Micro-F1 w.r.t. different ζ on the Cora dataset. We perform sensitivity analysis on the feature drop rate dfdr and edge drop rate dedr, which control the generation of graph augmentations. |
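The feature-drop and edge-drop augmentations referenced in the Experiment Setup row can be sketched as follows. This is a minimal NumPy illustration of the general GraphCL-style augmentation scheme, not the authors' released code (none is available per the Open Source Code row); the function name `augment_graph` and the exact masking scheme (zeroing whole feature dimensions, dropping edges independently) are assumptions for illustration.

```python
import numpy as np

def augment_graph(x, edge_index, dfdr=0.2, dedr=0.2, rng=None):
    """Generate one graph augmentation via feature and edge dropping.

    x          : (num_nodes, num_feats) node feature matrix
    edge_index : (2, num_edges) array of [src, dst] index pairs
    dfdr       : feature drop rate (probability a feature dimension is zeroed)
    dedr       : edge drop rate (probability an edge is removed)
    """
    rng = rng or np.random.default_rng()
    # Feature dropping: zero out each feature dimension with probability dfdr.
    feat_mask = rng.random(x.shape[1]) >= dfdr
    x_aug = x * feat_mask
    # Edge dropping: keep each edge independently with probability 1 - dedr.
    edge_mask = rng.random(edge_index.shape[1]) >= dedr
    return x_aug, edge_index[:, edge_mask]

# Toy example: 4 nodes, 3 features, 5 directed edges.
x = np.ones((4, 3))
edges = np.array([[0, 1, 2, 3, 0],
                  [1, 2, 3, 0, 2]])
x_aug, edges_aug = augment_graph(x, edges, dfdr=0.3, dedr=0.3,
                                 rng=np.random.default_rng(0))
```

In contrastive training, two such augmented views of the same graph would be encoded and pulled together in embedding space; dfdr and dedr are the sensitivity-analysis knobs the paper varies.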