Aligning Relational Learning with Lipschitz Fairness
Authors: Yaning Jia, Chunhui Zhang, Soroush Vosoughi
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally validate the Lipschitz bound's effectiveness in limiting biases of the model output. |
| Researcher Affiliation | Academia | Yaning Jia, Chunhui Zhang, Soroush Vosoughi; Dartmouth College, Hanover, NH, USA; HUST, Hubei, China |
| Pseudocode | Yes | Algorithm 1 JacoLip: Simplified PyTorch-style Pseudocode for Lipschitz Bounds in Fairness-Oriented GNN Training (illustrative sketches of this training style appear after the table) |
| Open Source Code | Yes | Our code has been released at https://github.com/chunhuizng/lipschitz-fairness. |
| Open Datasets | Yes | We conduct experiments on six real-world datasets commonly used in prior work on rank-based individual fairness (Dong et al., 2021). These include one citation network (ACM (Tang et al., 2008)) and two co-authorship networks (Co-author-CS and Co-author-Phy (Shchur et al., 2018)) for node classification, and three social networks (BlogCatalog (Tang & Liu, 2009), Flickr (Huang et al., 2017), and Facebook (Leskovec & McAuley, 2012)) for link prediction. |
| Dataset Splits | Yes | We adhere to the public train/val/test splits from Dong et al. (2021). |
| Hardware Specification | No | The paper mentions 'GPU memory usage' in Table 8, but it does not specify any particular GPU models (e.g., NVIDIA A100, RTX 2080 Ti), CPU models, or other specific hardware components used for the experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch' and the 'Adam' optimizer, but it does not provide version numbers for these or any other software dependencies, which would be needed to reproduce the software environment. |
| Experiment Setup | Yes | The learning rate is set at 0.01 for all tasks. For models based on GCN and SGC, we use two layers with 16 hidden units each. For GAE-based models, we employ three graph convolutional layers, with the first two layers having 32 and 16 hidden units, respectively. Adam is used as the optimizer (Kingma & Ba, 2015). Further details, including code, dataset splits, and hyperparameter settings, are available in Appendix D. (A configuration sketch follows the table.) |
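
The paper's Algorithm 1 (JacoLip) is described as PyTorch-style pseudocode; the released repository linked above is the authoritative reference. As a hedged illustration only, the sketch below shows one way a Lipschitz-bound regularizer could be wired into fairness-oriented GNN training: the class, the function names, the penalty form (per-layer spectral norms, a simple Lipschitz upper bound), and the weight `lam` are assumptions, not the authors' implementation.

```python
# Illustrative sketch (NOT the authors' released code): Lipschitz-bounded
# regularization for a 2-layer GCN, in the PyTorch style of Algorithm 1.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    """Two-layer GCN with 16 hidden units, matching the reported setup."""
    def __init__(self, in_dim, hid_dim=16, out_dim=7):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def lipschitz_penalty(model):
    # Sum of per-layer spectral norms: a simple upper bound on the network's
    # Lipschitz constant (ReLU itself is 1-Lipschitz). The exact bound used
    # by JacoLip is defined in the paper; this is a stand-in.
    penalty = 0.0
    for p in model.parameters():
        if p.dim() == 2:  # weight matrices only
            penalty = penalty + torch.linalg.matrix_norm(p, ord=2)
    return penalty


def train_step(model, data, optimizer, lam=1e-3):
    # One training step: task loss plus a weighted Lipschitz penalty that
    # discourages large variation in the model output.
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    task_loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss = task_loss + lam * lipschitz_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()
```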
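
A usage sketch under the reported hyperparameters (Adam, learning rate 0.01, two layers with 16 hidden units). It reuses `GCN` and `train_step` from the block above; the dataset choice, the placeholder split, and the epoch count are assumptions, since the paper follows the public train/val/test splits of Dong et al. (2021).

```python
import torch
from torch_geometric.datasets import Coauthor

# Co-author-CS node-classification data; the public splits from Dong et al.
# (2021) would replace the random placeholder mask below.
dataset = Coauthor(root='data/Coauthor', name='CS')
data = dataset[0]
data.train_mask = torch.rand(data.num_nodes) < 0.8  # placeholder split

model = GCN(in_dim=dataset.num_features, hid_dim=16, out_dim=dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # lr from the paper

for epoch in range(200):  # epoch count is an assumption
    loss = train_step(model, data, optimizer)
    if epoch % 20 == 0:
        print(f"epoch {epoch:03d}  loss {loss:.4f}")
```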