Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Group Fairness in Reinforcement Learning via Multi-Objective Rewards

Authors: Jack Blandin, Ian A. Kash

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we provide three experiments, where our goals are to demonstrate (i) Theorems 4.1-4.3 in action, (ii) how our reward encourages fairness (beyond just avoiding harm), and (iii) how our reward can lead to better policies than benchmarks.
Researcher Affiliation | Academia | Jack Blandin (EMAIL), Department of Computer Science, University of Illinois Chicago; Ian Kash (EMAIL), Department of Computer Science, University of Illinois Chicago
Pseudocode | No | The paper describes its methodology and model using mathematical definitions (e.g., Definition 2.1 for MDP, equations for reward functions) and prose, but it does not include any explicitly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code | Yes | The supporting code for these experiments is available at https://github.com/jackblandin/research/.
Open Datasets | No | The paper describes a simulated environment: "We consider instantiations of the MDP defined in Section 2 and model a two-step loan application environment." It does not use or provide access to any external, publicly available datasets.
Dataset Splits | No | The paper uses a simulated MDP environment with "100,000 episodes" for averaging results. It does not mention traditional dataset splits like training, validation, or test sets.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models, memory, or cloud computing instances.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation.
Experiment Setup | Yes | We set γ = 1/2 given the short length of our episodes. ... The MDP is simple enough that we can compute optimal policies for the multi-objective policies and Eq Op with linear programming. Leveraging Corollary 4.1, MMQ is computed by solving for π011. ... Eq Op tries to balance fairness with decision-maker utility by only trying to be fair to a subset of applicants. This approach is an adaptation of an Equal Opportunity constraint (Hardt et al., 2016) proposed by Wen et al. (2021). Our choices of ϵ and α follow the logic used by Wen et al. (2021). ... Initial-state parameters, identical across scenarios:

  Parameter                        Scen 1  Scen 2  Scen 3
  P(z = i), i ∈ {0, 1}              .50     .50     .50
  P(x = 1)                         1.0     1.0     1.0
  P(y_z = i | z), i ∈ {0, 1, 2}     .333    .333    .333
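For illustration, the quoted initial-state distribution (z uniform over {0, 1}, x fixed at 1, y_z uniform over {0, 1, 2}) can be sketched as a sampler. This is a minimal sketch based only on the table above; the function and variable names are hypothetical and not taken from the paper's code.

```python
import random

def sample_initial_state(rng=random):
    """Draw one initial MDP state per the paper's quoted parameters."""
    # P(z = i) = .50 for i in {0, 1}: group membership is a fair coin flip
    z = rng.randrange(2)
    # P(x = 1) = 1.0: the observed feature is always 1 in the initial state
    x = 1
    # P(y_z = i | z) = .333 for i in {0, 1, 2}: qualification level is uniform
    y = rng.randrange(3)
    return z, x, y
```

Since the three probabilities are identical across Scen 1-3, a single sampler covers all scenarios.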