Loss Balancing for Fair Supervised Learning

Authors: Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Then, we support our theoretical results through several empirical studies. We conduct experiments on two real-world datasets to evaluate the performance of the proposed algorithm.
Researcher Affiliation | Collaboration | (1) Yahoo Research, NYC, NY; the author also holds a visiting professor position at The Ohio State University, Columbus, OH. (2) The Ohio State University, Columbus, OH. (3) Optum Labs, London, UK; the work was done while at the Alan Turing Institute, London, UK.
Pseudocode | Yes | Algorithm 1, Function ELminimizer. Input: w_{G0}, w_{G1}, ϵ, γ. Parameters: λ_start^(0) = L_0(w_{G0}), λ_end^(0) = L_0(w_{G1}), i = 0. Define L̃_1(w) = L_1(w) + γ. (A hedged sketch of the bisection skeleton this initialization suggests appears after the table.)
Open Source Code | Yes | The codes are available at https://github.com/KhaliliMahdi/Loss_Balancing_ICML2023.
Open Datasets | Yes | In the first experiment, we use the law school admission dataset, which includes the information of 21,790 law students studying in 163 different law schools across the United States (Wightman, 1998). We consider the adult income dataset containing the information of 48,842 individuals (Kohavi, 1996).
Dataset Splits | No | The paper specifies a random 70%/30% train/test split for both datasets but does not explicitly mention a separate validation split. (A minimal split sketch appears after the table.)
Hardware Specification | Yes | In our experiments, we used a system with the following configurations: 24 GB of RAM, 2 cores of P100-16GB GPU, and 2 cores of Intel Xeon CPU@2.3 GHz processor.
Software Dependencies | No | The paper mentions using CVXPY and PyTorch but does not give version numbers for these or any other dependencies; the main text and appendix only state that 'you need to install packages in requirements.txt' without listing the versions there.
Experiment Setup | Yes | We set the penalty parameter t = 0.1 and increase this penalty coefficient by a factor of 2 every 100 iterations. We use the default parameters of Adam optimization in Pytorch. [...] a learning rate of 0.005, and a batch size of 100. We use Algorithm 2 and Algorithm 3 with ϵ = 0.01 to find the optimal linear regression model under EL and adopt CVXPY python library [...] Note that 0.002 ||w||_2^2 is the regularizer. (A hedged PyTorch sketch of this setup appears after the table.)
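
The Pseudocode row quotes only the header and initialization of Algorithm 1 (ELminimizer). Below is a loose Python sketch of the bisection skeleton that this initialization suggests; it is not the paper's exact procedure. The inner solver solve_constrained_subproblem and the interval-update rule are hypothetical placeholders.

    # Speculative sketch of a bisection skeleton consistent with the quoted
    # initialization of Algorithm 1 (ELminimizer); the inner solver and the
    # update rule are hypothetical placeholders, not the paper's exact steps.
    def el_minimizer(w_g0, w_g1, eps, gamma, L0, L1, solve_constrained_subproblem):
        # w_g0, w_g1: models optimal for group 0 and group 1, respectively.
        # eps: stopping tolerance; gamma: allowed gap between group losses.
        # L0, L1: callables returning the group-0 / group-1 expected losses.
        lam_start, lam_end = L0(w_g0), L0(w_g1)   # quoted initialization
        L1_tilde = lambda w: L1(w) + gamma        # quoted shifted group-1 loss
        w_mid = w_g0
        while abs(lam_end - lam_start) > eps:
            lam_mid = (lam_start + lam_end) / 2.0
            # Hypothetical inner step: best model whose group-0 loss is lam_mid.
            w_mid = solve_constrained_subproblem(lam_mid)
            # Hypothetical update: move toward the point where the shifted
            # group-1 loss matches the group-0 loss.
            if L1_tilde(w_mid) > lam_mid:
                lam_start = lam_mid
            else:
                lam_end = lam_mid
        return w_mid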
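
The Dataset Splits row reports a random 70%/30% train/test split and no validation split. A minimal sketch of such a split, with hypothetical placeholder arrays standing in for the law school or adult income data:

    # Minimal sketch of the reported 70% train / 30% test random split.
    # X, y, a are hypothetical placeholders for the features, labels, and
    # group attribute of the law school or adult income dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))    # placeholder features
    y = rng.normal(size=1000)          # placeholder targets
    a = rng.integers(0, 2, size=1000)  # placeholder group attribute

    X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
        X, y, a, test_size=0.3, random_state=0)  # 70% / 30%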
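
The Experiment Setup row quotes the main hyperparameters. The following is a minimal PyTorch sketch under stated assumptions: a linear model, synthetic placeholder data, and a generic |L_0 - L_1| penalty standing in for the paper's equalized-loss term. Only the quoted values (t = 0.1 doubled every 100 iterations, Adam with default parameters except lr = 0.005, batch size 100, and the 0.002 ||w||_2^2 regularizer) come from the paper.

    # Hedged sketch of the quoted training setup; the model, data, and exact
    # form of the objective are assumptions, the hyperparameters are quoted.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    model = torch.nn.Linear(10, 1)                        # placeholder linear model
    opt = torch.optim.Adam(model.parameters(), lr=0.005)  # Adam defaults, lr = 0.005

    X = torch.randn(2000, 10)            # placeholder features
    y = torch.randn(2000, 1)             # placeholder targets
    a = torch.randint(0, 2, (2000,))     # placeholder group attribute
    loader = DataLoader(TensorDataset(X, y, a), batch_size=100, shuffle=True)

    mse = torch.nn.MSELoss()
    t = 0.1                              # penalty coefficient
    iteration = 0
    for epoch in range(50):
        for xb, yb, ab in loader:        # assumes every batch contains both groups
            pred = model(xb)
            loss0 = mse(pred[ab == 0], yb[ab == 0])   # group-0 loss
            loss1 = mse(pred[ab == 1], yb[ab == 1])   # group-1 loss
            reg = 0.002 * model.weight.pow(2).sum()   # 0.002 * ||w||_2^2
            # Generic penalty objective; the |loss0 - loss1| term is a stand-in
            # for the paper's equalized-loss penalty.
            objective = 0.5 * (loss0 + loss1) + t * (loss0 - loss1).abs() + reg
            opt.zero_grad()
            objective.backward()
            opt.step()
            iteration += 1
            if iteration % 100 == 0:
                t *= 2                   # double the penalty every 100 iterations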