Fair Graph Distillation

Authors: Qizhang Feng, Zhimeng (Stephen) Jiang, Ruiquan Li, Yicheng Wang, Na Zou, Jiang Bian, Xia Hu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that the proposed algorithm can achieve better prediction performance-fairness trade-offs across various datasets and GNN architectures.
Researcher Affiliation | Academia | Qizhang Feng1, Zhimeng Jiang1, Ruiquan Li2, Yicheng Wang1, Na Zou1, Jiang Bian3, Xia Hu4; 1Texas A&M University, 2University of Science and Technology of China, 3University of Florida, 4Rice University
Pseudocode | Yes | Algorithm 1 Fair Graph Distillation
Open Source Code | No | The paper does not provide an explicit statement or link regarding the public availability of its source code.
Open Datasets | Yes | We use five real-world datasets, including Pokec-z, Pokec-n [Dai and Wang, 2021, Takac and Zabovsky, 2012], German, Credit, and Recidivism [Agarwal et al., 2021].
Dataset Splits | No | The paper does not explicitly provide specific percentages or counts for training, validation, and test dataset splits, nor does it refer to standard predefined splits for the listed datasets.
Hardware Specification | Yes | We encountered out-of-memory (OOM) issues when implementing GUIDE and REDRESS on an NVIDIA GeForce RTX A5000 (24 GB GPU memory).
Software Dependencies | No | The paper mentions software components like GNN models (GCN, SGC, GraphSAGE) and the Adam optimizer, but does not provide specific version numbers for any libraries or frameworks used.
Experiment Setup | Yes | Synthesizer training: We adopt the Adam optimizer for synthesizer training with a 0.0002 learning rate. MLPϕ consists of 3 linear layers with 128 hidden dimensions. The outer loop number is 16 while the inner loop is 4 for each epoch. For each experiment, we train for a maximum of 1200 epochs and 3 independent runs. The temperature parameter γ is set to 10. X and ϕ are optimized alternately. GNN training: We adopt the Adam optimizer for GNN training with a 0.005 learning rate. All GNN models have 2 layers with 256 hidden dimensions. For Pokec-z, Pokec-n, German, Credit, and Recidivism, the training epochs are 1500, 1500, 4000, 1000, and 1000 respectively.
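
The reported hyperparameters map directly onto a training configuration. Below is a minimal PyTorch-style sketch of how the synthesizer stage could be wired up, assuming only the values quoted above (learning rates, layer counts, loop counts, temperature). The module and function names (MLPPhi, distill_loss, train_synthesizer) and the exact interleaving of the X and ϕ updates are illustrative assumptions, not the authors' released implementation (no source code is available per the Open Source Code row).

```python
import torch
import torch.nn as nn

# Values quoted in the Experiment Setup row; everything else below is assumed.
SYNTH_LR = 0.0002      # synthesizer (X and MLP_phi) learning rate
GNN_LR = 0.005         # GNN learning rate (2 layers, 256 hidden dims, not shown here)
HIDDEN_MLP = 128       # hidden dimension of the 3-layer MLP_phi
OUTER_LOOPS = 16       # outer loops per epoch
INNER_LOOPS = 4        # inner loops per outer loop
GAMMA = 10             # temperature parameter
MAX_EPOCHS = 1200      # maximum synthesizer training epochs


class MLPPhi(nn.Module):
    """Assumed 3-linear-layer MLP_phi with 128 hidden dimensions."""

    def __init__(self, in_dim: int, hidden: int = HIDDEN_MLP):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_synthesizer(x_syn: torch.Tensor, mlp_phi: MLPPhi, distill_loss) -> None:
    """Alternately optimize synthetic features X and MLP_phi with Adam.

    `x_syn` must be a leaf tensor with requires_grad=True; `distill_loss` is a
    placeholder for the paper's distillation-plus-fairness objective, which is
    not reproduced here. The nesting of inner/outer loops is a guess at how
    "X and phi are optimized alternately" could be scheduled.
    """
    opt_x = torch.optim.Adam([x_syn], lr=SYNTH_LR)
    opt_phi = torch.optim.Adam(mlp_phi.parameters(), lr=SYNTH_LR)

    for _epoch in range(MAX_EPOCHS):
        for _outer in range(OUTER_LOOPS):
            # Inner loop: update the synthetic features X.
            for _inner in range(INNER_LOOPS):
                opt_x.zero_grad()
                distill_loss(x_syn, mlp_phi).backward()
                opt_x.step()
            # Outer step: update MLP_phi with X held fixed.
            opt_phi.zero_grad()
            distill_loss(x_syn, mlp_phi).backward()
            opt_phi.step()
```

The GNN training stage described in the same row (Adam, 0.005 learning rate, 2-layer models with 256 hidden dimensions, dataset-specific epoch counts) would be a separate, standard supervised loop on the distilled graph and is omitted from the sketch.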