On Learning Fairness and Accuracy on Multiple Subgroups
Authors: Changjian Shui, Gezheng Xu, Qi CHEN, Jiaqi Li, Charles X. Ling, Tal Arbel, Boyu Wang, Christian Gagné
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed framework on real-world datasets. Empirical evidence suggests consistently improved fair predictions, with accuracy comparable to the baselines. |
| Researcher Affiliation | Academia | 1 Centre for Intelligent Machines, McGill University; 2 Mila, Quebec AI Institute; 3 Department of Computer Science, University of Western Ontario; 4 Institute Intelligence and Data, Université Laval; 5 CIFAR AI Chair |
| Pseudocode | Yes | Algorithm 1 Fair and Informative Learning for Multiple Subgroups (FAMS). A hedged sketch of such an update follows this table. |
| Open Source Code | Yes | Code is available at https://github.com/xugezheng/FAMS. |
| Open Datasets | Yes | We adopt the Amazon review dataset [55, 40]... We also use the toxic comment dataset [58]... |
| Dataset Splits | Yes | We draw and then fix 200 users from the original dataset, which includes the training, validation, and test sets. An illustrative split sketch follows this table. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as specific GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using "DistilBERT [57]" but does not provide specific version numbers for it or for any other software dependencies, such as programming languages or libraries. |
| Experiment Setup | No | In the implementation, we first adopt DistilBERT [57] to learn the embedding with dimension R^768. Then we adopt f_w and f_{w_a} as four-layer fully connected neural networks... Additional experimental details are delegated to the Appendix. A sketch of this architecture follows this table. |
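
The pseudocode row names Algorithm 1 (FAMS) but the table does not reproduce it. Below is a minimal sketch of one plausible alternating update built around the method's core idea: subgroup-specific predictors f_{w_a} fit their own subgroups while being pulled toward a shared predictor f_w. The L2 proximity term, the averaging outer update, and the names `fams_step` and `lam` are illustrative assumptions, not the authors' exact Algorithm 1.

```python
import torch
import torch.nn as nn

def fams_step(f_w, subgroup_models, subgroup_loaders, lam=1.0, lr=1e-3):
    """One hypothetical alternating update in the spirit of FAMS.

    Assumes every subgroup model shares the architecture of the
    shared predictor f_w, so parameters align position by position.
    """
    # Lower level: fit each subgroup predictor f_{w_a} on its own
    # subgroup data, with a proximity term keeping w_a close to w.
    for f_wa, loader in zip(subgroup_models, subgroup_loaders):
        opt = torch.optim.SGD(f_wa.parameters(), lr=lr)
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(f_wa(x), y)
            prox = sum((pa - p.detach()).pow(2).sum()
                       for pa, p in zip(f_wa.parameters(), f_w.parameters()))
            (loss + lam * prox).backward()
            opt.step()
    # Upper level: move the shared predictor toward the average of
    # the subgroup solutions (one interpretation of the outer step).
    with torch.no_grad():
        for i, p in enumerate(f_w.parameters()):
            avg = torch.stack(
                [list(m.parameters())[i] for m in subgroup_models]
            ).mean(dim=0)
            p.lerp_(avg, lr)
```

The L2 proximity term stands in for whatever closeness measure the paper actually uses; swapping in, say, a KL term between predictive distributions would preserve the same two-level loop structure.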
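The split row quotes "draw and then fix 200 users." A minimal sketch of such a fixed, reproducible draw, assuming per-user 70/10/20 train/validation/test fractions and a fixed seed (both illustrative; the paper's exact ratios are not given here):

```python
import numpy as np

def fixed_user_splits(reviews_by_user, n_users=200, seed=0,
                      fractions=(0.7, 0.1, 0.2)):
    """Draw and fix a set of users, then split each user's examples
    into train/val/test. Fractions and seed are assumptions."""
    rng = np.random.default_rng(seed)  # fixed seed => the draw is fixed
    users = rng.choice(sorted(reviews_by_user), size=n_users, replace=False)
    splits = {"train": {}, "val": {}, "test": {}}
    for u in users:
        idx = rng.permutation(len(reviews_by_user[u]))
        n_tr = int(fractions[0] * len(idx))
        n_va = int(fractions[1] * len(idx))
        splits["train"][u] = idx[:n_tr]
        splits["val"][u] = idx[n_tr:n_tr + n_va]
        splits["test"][u] = idx[n_tr + n_va:]
    return splits
```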
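The setup row describes 768-dimensional DistilBERT embeddings feeding a four-layer fully connected network. A minimal sketch, assuming PyTorch and Hugging Face `transformers`; the hidden width (256), ReLU activations, and use of the first-token embedding are assumptions, since the paper delegates these details to its Appendix.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizer

class FourLayerHead(nn.Module):
    """Four-layer fully connected head over 768-d DistilBERT
    embeddings. Hidden width and activations are assumptions."""
    def __init__(self, dim_in=768, hidden=256, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
head = FourLayerHead()

batch = tokenizer(["an example review"], return_tensors="pt",
                  padding=True, truncation=True)
with torch.no_grad():
    emb = encoder(**batch).last_hidden_state[:, 0]  # first-token embedding, R^768
logits = head(emb)
```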