Adversarial Attacks on Fairness of Graph Neural Networks

Authors: Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li

ICLR 2024

Reproducibility checklist. Each entry below gives the variable assessed, the result, and the supporting excerpt (LLM response) from the paper.
Research Type: Experimental
Excerpt: "The experimental study demonstrates that G-FairAttack successfully corrupts the fairness of different types of GNNs while keeping the attack unnoticeable. Our study on fairness attacks sheds light on potential vulnerabilities in fairness-aware GNNs and guides further research on the robustness of GNNs in terms of fairness." "Experimental Evaluation. We conduct extensive experiments on three real-world datasets with four types of victim models and verify that our proposed G-FairAttack successfully jeopardizes the fairness of various fairness-aware GNNs with an unnoticeable effect on prediction utility."
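The claim pairs a fairness drop with unchanged utility. As a minimal sketch of how such a result is typically scored, the function below computes statistical parity and equal opportunity differences for binary predictions and a binary sensitive attribute; these are the standard group-fairness metrics in this literature, and their use here is an assumption rather than a quote from the paper.

```python
import numpy as np

def fairness_gaps(y_pred, y_true, sens):
    """Statistical parity and equal opportunity differences for binary
    predictions (0/1) and a binary sensitive attribute (groups 0/1)."""
    g0, g1 = (sens == 0), (sens == 1)
    # Statistical parity: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|
    delta_sp = abs(y_pred[g0].mean() - y_pred[g1].mean())
    # Equal opportunity: true-positive-rate gap between the two groups
    pos0, pos1 = g0 & (y_true == 1), g1 & (y_true == 1)
    delta_eo = abs(y_pred[pos0].mean() - y_pred[pos1].mean())
    return delta_sp, delta_eo

# An attack is "unnoticeable" in the paper's sense if these gaps grow
# after poisoning while test accuracy stays roughly unchanged.
```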
Researcher Affiliation: Collaboration
Excerpt: "Binchi Zhang (1), Yushun Dong (1), Chen Chen (1), Yada Zhu (2), Minnan Luo (3), Jundong Li (1); (1) University of Virginia, (2) IBM Research, (3) Xi'an Jiaotong University"
Pseudocode: Yes
Excerpt: "With rt(u, v), the pseudocode of our proposed attack algorithm is shown in Appendix C.1."
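The actual pseudocode lives in the paper's Appendix C.1. For orientation only, a greedy structure-poisoning loop of the kind such attacks commonly use is sketched below; the scoring function, the flip budget, and the utility-change constraint are placeholders, not the authors' algorithm.

```python
import numpy as np

def greedy_fairness_poisoning(adj, score_fn, utility_change_fn, budget, eps):
    """Hypothetical greedy loop: at each step, flip the edge with the
    highest estimated fairness-loss increase, subject to a flip budget
    and an upper bound `eps` on the surrogate utility change.

    adj: dense 0/1 symmetric adjacency matrix (numpy array)
    score_fn(adj, u, v): estimated fairness-loss increase of flipping (u, v)
    utility_change_fn(adj, u, v): estimated utility-loss change of the flip
    """
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        best, best_score = None, -np.inf
        for u in range(n):
            for v in range(u + 1, n):
                if abs(utility_change_fn(adj, u, v)) > eps:
                    continue  # this flip would be "noticeable" in utility
                s = score_fn(adj, u, v)
                if s > best_score:
                    best, best_score = (u, v), s
        if best is None:
            break  # no admissible flip left under the constraint
        u, v = best
        adj[u, v] = adj[v, u] = 1 - adj[u, v]  # flip the chosen edge
    return adj
```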
Open Source Code: Yes
Excerpt: "The open-source code is available at https://github.com/zhangbinchi/G-FairAttack."
Open Datasets: Yes
Excerpt: "We adopt three prevalent real-world datasets, i.e., Facebook (Leskovec & Mcauley, 2012), Credit (Agarwal et al., 2021), and Pokec (Dai & Wang, 2021; Dong et al., 2022a), to test the effectiveness of G-FairAttack. In our experiment implementation, we adopt the PyGDebias library (Dong et al., 2023a) to load these datasets."
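A loading step in the style described would look roughly like the sketch below; the class and attribute names are hypothetical stand-ins, since the exact PyGDebias interface is not quoted in the paper and should be checked against the library itself.

```python
# Hypothetical sketch only: the class and attribute names below are
# assumptions, not the documented PyGDebias API.
from pygdebias.datasets import Facebook

data = Facebook()  # loads the Facebook ego-network dataset
# A fairness attack needs the graph structure, node features, labels,
# and the per-node sensitive attribute; names here are illustrative.
adj, feats = data.adj, data.features
labels, sens = data.labels, data.sens
```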
Dataset Splits: Yes
Excerpt: "Table 3: Dataset statistics. ... #Train/% #Validation/% #Test/%"
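Since the splits are reported as percentages rather than fixed index files, a generic way to rebuild percentage-based node splits is shown below; the fractions, stratification, and seed are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_nodes(num_nodes, labels, train_frac=0.5, val_frac=0.25, seed=0):
    """Split node indices into train/val/test by the given fractions."""
    idx = np.arange(num_nodes)
    idx_train, idx_rest = train_test_split(
        idx, train_size=train_frac, stratify=labels, random_state=seed)
    # Rescale the validation fraction relative to the remaining nodes.
    rel_val = val_frac / (1.0 - train_frac)
    idx_val, idx_test = train_test_split(
        idx_rest, train_size=rel_val, stratify=labels[idx_rest],
        random_state=seed)
    return idx_train, idx_val, idx_test
```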
Hardware Specification: Yes
Excerpt: "All experiments are implemented on an Nvidia RTX A6000 GPU."
Software Dependencies: Yes
Excerpt: "PyTorch == 1.11.0, torch-geometric == 2.0.4, numpy == 1.21.5, numba == 0.56.3, networkx == 2.8.4, scikit-learn == 1.1.1, scipy == 1.9.1, dgl == 0.9.1, deeprobust == 0.2.5"
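A quick way to confirm an environment against this pinned list is to compare installed distribution versions; the mapping of the listed names to PyPI distribution names (e.g., PyTorch to torch) is assumed.

```python
from importlib.metadata import version, PackageNotFoundError

# PyPI distribution names assumed for the packages listed in the paper.
PINNED = {
    "torch": "1.11.0", "torch-geometric": "2.0.4", "numpy": "1.21.5",
    "numba": "0.56.3", "networkx": "2.8.4", "scikit-learn": "1.1.1",
    "scipy": "1.9.1", "dgl": "0.9.1", "deeprobust": "0.2.5",
}

for name, want in PINNED.items():
    try:
        got = version(name)
        status = "OK" if got == want else f"MISMATCH (installed {got})"
    except PackageNotFoundError:
        status = "MISSING"
    print(f"{name}=={want}: {status}")
```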
Experiment Setup: Yes
Excerpt: "We provide the hyperparameter settings of G-FairAttack in Table 5, and the hyperparameter settings of test GNNs in Table 4."