Deceptive Fairness Attacks on Graphs via Meta Learning
Authors: Jian Kang, Yinglong Xia, Ross Maciejewski, Jiebo Luo, Hanghang Tong
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification. The experimental results demonstrate that FATE could amplify the bias of graph neural networks with or without fairness consideration while maintaining the utility on the downstream task. |
| Researcher Affiliation | Collaboration | 1University of Rochester, {jian.kang@, jluo@cs.}rochester.edu 2Meta, yxia@meta.com 3Arizona State University, rmacieje@asu.edu 4University of Illinois Urbana-Champaign, htong@illinois.edu |
| Pseudocode | Yes | Appendix B presents the pseudocode of FATE. Algorithm 1 summarizes the detailed steps on fairness attack with FATE. |
| Open Source Code | Yes | Code can be found at the following repository: https://github.com/jiank2/FATE. ... the code will be publicly released under CC-BY-NC-ND license upon publication |
| Open Datasets | Yes | We use three widely-used benchmark datasets for fair graph learning: Pokec-z, Pokec-n and Bail. |
| Dataset Splits | Yes | For each dataset, we use a fixed random seed to split the dataset into training, validation and test sets with the split ratio being 50%, 25%, and 25%, respectively. |
| Hardware Specification | Yes | All experiments are performed on a Linux server with 2 Intel Xeon Gold 6240R CPUs and 4 Nvidia Tesla V100 SXM2 GPUs, each of which has 32 GB memory. |
| Software Dependencies | Yes | All codes are programmed in Python 3.8.13 and PyTorch 1.12.1. |
| Experiment Setup | Yes | Surrogate model training. ...The surrogate GCN in FA-GNN is trained for 500 epochs with a learning rate 1e-2, weight decay 5e-4, and dropout rate 0.5. For FATE, we use a 2-layer linear GCN...trained for 500 epochs with a learning rate 1e-2, weight decay 5e-4, and dropout rate 0.5. Training the victim model. ...The hidden dimension, learning rate, weight decay and dropout rate of GCN and FairGNN are set to 128, 1e-3, 1e-5 and 0.5, respectively. |
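The split protocol and victim-model hyperparameters quoted above can be sketched as follows. This is a minimal illustration, not code from the FATE repository: the function name, the seed value, and the config-dict layout are assumptions; only the 50/25/25 ratio and the hyperparameter values come from the quoted excerpts.

```python
import random

def split_nodes(num_nodes, seed=0):
    """Seeded 50/25/25 train/val/test split of node indices, as described
    in the paper's Dataset Splits entry. The fixed seed makes the split
    reproducible across runs; the seed value here is illustrative."""
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(0.50 * num_nodes)   # 50% training
    n_val = int(0.25 * num_nodes)     # 25% validation
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]      # remaining 25% is the test set
    return train, val, test

# Victim-model hyperparameters quoted in the Experiment Setup row
# (GCN / FairGNN); the dict structure is an assumption for illustration.
VICTIM_HPARAMS = {
    "hidden_dim": 128,
    "lr": 1e-3,
    "weight_decay": 1e-5,
    "dropout": 0.5,
}

if __name__ == "__main__":
    train, val, test = split_nodes(1000, seed=42)
    print(len(train), len(val), len(test))  # 500 250 250
```

Splitting indices rather than the feature matrix itself keeps the graph intact, which matters for semi-supervised node classification: all nodes participate in message passing, and the split only controls which labels are visible during training.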