Fair Representation Learning through Implicit Path Alignment
Authors: Changjian Shui, Qi Chen, Jiaqi Li, Boyu Wang, Christian Gagné
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further analyze the error gap of the implicit approach and empirically validate the proposed method in both classification and regression settings. Experimental results show a consistently better trade-off in prediction performance and fairness measurement. |
| Researcher Affiliation | Academia | ¹Université Laval, Québec, Canada; ²University of Western Ontario, Ontario, Canada; ³Canada CIFAR AI Chair, Mila. |
| Pseudocode | Yes | Proposed algorithm: Based on the key elements, the proposed algorithm is shown in Algo. 1. (Algorithm 1: Implicit Path Alignment Algorithm) |
| Open Source Code | No | The paper does not provide any specific links to source code repositories or explicit statements about the release of their implementation code. |
| Open Datasets | Yes | The toxic comments dataset (Jigsaw, 2018) is a binary classification task in NLP... The CelebA dataset (Liu et al., 2015) contains around 200K images... The Law Dataset is a regression task... (Wightman, 1998)... The National Longitudinal Survey of Youth (NLSY, 2021) dataset is a regression task... |
| Dataset Splits | Yes | We split the training, validation and testing set as 70%, 10% and 20%... We randomly select around 82K and 18K images as the training and validation set. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used (e.g., GPU models, CPU types, or cloud instance specifications) for running its experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'PyTorch code' in Appendix G.1, but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We adopt Adam optimizer with learning rate 10⁻³ and eps 10⁻³. The batch-size is set as 500 for each sub-group and we use sampling with replacement to run the explicit algorithm with maximum epoch 100. The fair coefficient is generally set as κ = 0.1 ∼ 0.001. As for the inner-optimization step, the iteration number is 20 and the iteration in running conjugate gradient approach is 10. |
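
Since no source code is released, the hyperparameters in the Experiment Setup row are the main reproduction signal. Below is a minimal, self-contained PyTorch sketch of the two mechanisms that row describes: roughly 20 Adam steps (lr and eps of 10⁻³) on an inner per-group problem, followed by a 10-iteration conjugate-gradient solve of a Hessian-inverse-vector product, which is the standard way implicit (bilevel) hypergradients are approximated. This is not the authors' implementation; the dimensions, module names, and the toy MSE inner loss are hypothetical placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy sizes; n = 500 matches the reported per-sub-group batch size.
d_in, d_rep, n = 16, 8, 500

phi = nn.Linear(d_in, d_rep)   # representation network (outer variable)
w = nn.Linear(d_rep, 1)        # inner predictor for one sub-group
# Reported optimizer settings: Adam with lr 1e-3 and eps 1e-3.
opt_inner = torch.optim.Adam(w.parameters(), lr=1e-3, eps=1e-3)

x = torch.randn(n, d_in)       # stand-in for one sub-group's batch
y = torch.randn(n, 1)

def inner_loss():
    # Placeholder inner objective; the paper's actual losses differ by task.
    return nn.functional.mse_loss(w(phi(x)), y)

# 1) Approximately solve the inner problem (reported: 20 inner iterations).
for _ in range(20):
    opt_inner.zero_grad()
    inner_loss().backward()
    opt_inner.step()

params = list(w.parameters())

def flatten(tensors):
    return torch.cat([t.reshape(-1) for t in tensors])

def hvp(vec):
    # Hessian-vector product of the inner loss wrt the inner parameters,
    # computed with double backprop; `vec` is a flat, detached vector.
    grads = torch.autograd.grad(inner_loss(), params, create_graph=True)
    gv = flatten(grads) @ vec
    return flatten(torch.autograd.grad(gv, params)).detach()

def conjugate_gradient(b, iters=10):
    # Approximately solve H x = b with `iters` CG steps (reported: 10),
    # where H is only available through Hessian-vector products.
    xk = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp + 1e-12)
        xk = xk + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return xk

# 2) Inverse-Hessian-vector product, the core of the implicit hypergradient.
g = flatten(torch.autograd.grad(inner_loss(), params)).detach()
v = conjugate_gradient(g, iters=10)
print(v.shape)
```

In the full method, this inverse-Hessian-vector product would be combined with the mixed second derivatives of the inner loss to differentiate through the inner optimum and update the representation network, with the fairness/alignment term weighted by κ in the reported 0.1 ∼ 0.001 range; that outer update is omitted here for brevity.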