Alternative Microfoundations for Strategic Classification

Authors: Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt

ICML 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Inspired by smoothed analysis and empirical observations, noisy response incorporates imperfection in the agent responses, which we show mitigates the limitations of standard microfoundations. Our model retains analytical tractability, leads to more robust insights about stable points, and imposes a lower social burden at optimality. ... In fact, we show via simulations that a larger variance of the noise in the manipulation target leads to more conservative optimal thresholds. ... Figure 1. Convergence of retraining algorithm in a 1d-setting for different values of p with ϵ = 10^-2. The population consists of 10^5 individuals. Half of the individuals are sampled from x ∼ N(1, 0.33) with true label 1 and the other half is sampled from x ∼ N(0, 0.33) with true label 0.
Researcher Affiliation Academia Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt (University of California, Berkeley). Correspondence to: Meena Jagadeesan <mjagadeesan@berkeley.edu>.
Pseudocode No No pseudocode or algorithm blocks were found in the paper.
Open Source Code No The paper does not provide any specific links or explicit statements about the release of source code for the described methodology.
Open Datasets No The paper uses synthetic data for its simulations, as described in Figure 1: "The population consists of 10^5 individuals. Half of the individuals are sampled from x ∼ N(1, 0.33) with true label 1 and the other half is sampled from x ∼ N(0, 0.33) with true label 0." It does not provide access information for a publicly available dataset for its own experiments.
Dataset Splits No The paper does not specify explicit training, validation, or test splits for the synthetic data used in its simulations.
Hardware Specification No The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory specifications.
Software Dependencies No The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch x.x) required to replicate the experiments.
Experiment Setup Yes Figure 1. Convergence of retraining algorithm in a 1d-setting for different values of p with ϵ = 10^-2. The population consists of 10^5 individuals. Half of the individuals are sampled from x ∼ N(1, 0.33) with true label 1 and the other half is sampled from x ∼ N(0, 0.33) with true label 0. θ^0_PS and θ^1_PS are defined as in Proposition 2 for standard microfoundations (and similarly for noisy response). The parameter of the noisy responses (NR) in (b) is taken to be σ^2 = 0.1.
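The synthetic setup described in the figure caption is straightforward to reconstruct. The sketch below generates the stated population and illustrates one noisy-response step; the function name, the manipulation budget, and the "move if within reach" rule are our assumptions for illustration, not the paper's exact agent model, and we read 0.33 as the variance of the Gaussians since the caption does not distinguish variance from standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population from the Figure 1 caption: 10^5 individuals, half drawn from
# N(1, 0.33) with true label 1 and half from N(0, 0.33) with true label 0.
# 0.33 is treated as the variance here (an assumption).
n = 10**5
x = np.concatenate([
    rng.normal(1.0, np.sqrt(0.33), n // 2),  # true label 1
    rng.normal(0.0, np.sqrt(0.33), n // 2),  # true label 0
])
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])

def noisy_response(x, theta, eps=1e-2, sigma2=0.1, budget=1.0, rng=rng):
    """Hypothetical sketch of a noisy agent response: each agent below the
    threshold aims for a manipulation target theta + eps perturbed by
    Gaussian noise of variance sigma2 (the paper's sigma^2 = 0.1), and moves
    there only if the required shift is within a cost budget (assumed)."""
    target = theta + eps + rng.normal(0.0, np.sqrt(sigma2), size=x.shape)
    shift = target - x
    move = (x < theta) & (shift > 0) & (shift <= budget)
    return np.where(move, target, x)

x_shifted = noisy_response(x, theta=0.5)
```

A retraining loop as in Figure 1 would alternate between refitting the threshold on the shifted features and letting agents respond again; the sketch above covers only the population and a single response step.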