Fair Participation via Sequential Policies
Authors: Reilly Raab, Ross Boczar, Maryam Fazel, Yang Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'Finally, we experimentally validate our claims in a semi-synthetic setting.' From the Experiments section: 'We evaluate CPG against RGD and MPG in multiple semi-synthetic settings.' |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, University of California, Santa Cruz 2Department of Electrical and Computer Engineering, University of Washington |
| Pseudocode | No | The paper refers to an 'Algorithm' section, but the algorithm (CPG) is presented as a mathematical optimization problem (a sequential quadratic program) in equation form, not as a structured pseudocode block with numbered steps or an explicit 'Algorithm' label. |
| Open Source Code | No | The Technical Appendix and a link to the code repository associated with this paper will be made available on arXiv. |
| Open Datasets | Yes | Our settings derive from binary classification tasks on the American Community Survey Public Use Microdata Sample (ACS PUMS) dataset, as introduced by Ding et al. (2021), for specific US states in 2018, or a recommendation task on movie preferences using data (MovieLens) collected by Harper and Konstan (2015). (footnote 3 links to https://github.com/socialfoundations/folktables) |
| Dataset Splits | No | The paper mentions using datasets for semi-synthetic experiments but does not explicitly provide details about train/validation/test splits, percentages, or absolute counts for each. |
| Hardware Specification | No | The paper states, 'Each experiments run in less than 60 seconds on a typical laptop CPU.' This is a general statement and does not provide specific hardware models or detailed specifications (e.g., CPU model, GPU model, RAM). |
| Software Dependencies | No | The paper mentions 'scikit-learn' as an implementation detail for LBFGS but does not provide specific version numbers for it or any other software dependencies. |
| Experiment Setup | Yes | We use a learning rate that decays as a harmonic series: ηₜ = η₁/t for t ∈ {1, 2, ...}. All experiments follow the same decay schedule and run for the same number of steps (i.e., 30), but the initial learning rate η₁ is equal to half of the diameter of A. |
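The harmonic decay schedule described in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' code: the diameter value below is a hypothetical placeholder, since the paper only states that η₁ equals half the diameter of the action set A.

```python
def harmonic_schedule(eta_1: float, num_steps: int) -> list[float]:
    """Return learning rates decaying as a harmonic series: eta_t = eta_1 / t."""
    return [eta_1 / t for t in range(1, num_steps + 1)]

# Placeholder: diameter of the action set A (not specified numerically in the paper).
diameter_A = 2.0
eta_1 = diameter_A / 2          # initial rate = half the diameter, per the paper
rates = harmonic_schedule(eta_1, num_steps=30)  # 30 steps, as in all experiments

print(rates[0])   # 1.0 (eta_1 itself at t = 1)
print(len(rates)) # 30
```

Each step t simply divides the initial rate by t, so the schedule sums like the harmonic series while individual steps shrink toward zero.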