Plug-in Performative Optimization
Authors: Licong Lin, Tijana Zrnic
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Figure 1 we see that the excess risk of our algorithm converges rapidly to a value that reflects the degree of misspecification. It approaches zero for s = 0 (top panel) due to no misspecification and stabilizes at a nonzero value for s > 0 (middle and bottom panels), consistent with our theory. In contrast, the risks of both PerfGD and DFO decrease slowly, while SGD quickly reaches a suboptimal value. |
| Researcher Affiliation | Academia | 1Department of Statistics, University of California, Berkeley, USA 2Stanford Data Science and Department of Statistics, Stanford University, USA. |
| Pseudocode | Yes | Algorithm 1: Plug-in performative optimization. Require: distribution atlas D_B, exploration strategy D, loss ℓ(z; θ), map-fitting algorithm dMap. 1: Deploy θ_i ∼ D, observe z_i ∼ D(θ_i), i ∈ [n]. 2: Fit distribution map: β̂ = dMap((θ_1, z_1), ..., (θ_n, z_n)), where β̂ ∈ B. 3: Compute plug-in performative optimum: θ̂_PO = arg min_{θ ∈ Θ} E_{z ∼ D_β̂(θ)}[ℓ(z; θ)]. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | We use the credit data set, in particular the processed version available at: https://github.com/ustunb/actionable-recourse. |
| Dataset Splits | No | The paper mentions using "5000 i.i.d. base samples" and "1500 randomly drawn data points to form the base distribution D0" but does not provide specific percentages or counts for training, validation, or test splits. It also does not refer to standard, predefined splits with citations. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments. It only discusses the experimental setup at a high level. |
| Software Dependencies | No | The paper does not explicitly list any software dependencies with specific version numbers. While it implies the use of common machine learning libraries in Python (e.g., for SGD, logistic regression), no versions are specified. |
| Experiment Setup | Yes | We choose the step size parameter c₀ ∈ [10⁻⁴, 10⁻¹], the batch size m ∈ [1, 500], and δ ∈ [0.1, 100] via grid search. We choose the number of burn-in steps H = 10d. |
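The three steps of Algorithm 1 can be sketched on a toy instance. The sketch below assumes a hypothetical one-dimensional distribution map D_β(θ) = N(μ₀ + βθ, 1) and squared loss ℓ(z; θ) = (z − θ)², with least squares as the map-fitting algorithm dMap; none of these modeling choices are from the paper.

```python
import numpy as np

def plug_in_performative_optimum(n=5000, mu0=1.0, beta_true=0.6, seed=0):
    """Toy plug-in performative optimization: explore, fit the map, plug in."""
    rng = np.random.default_rng(seed)
    # Step 1: deploy exploration models theta_i ~ D (uniform here) and
    # observe z_i ~ D(theta_i) = N(mu0 + beta*theta_i, 1).
    thetas = rng.uniform(-1.0, 1.0, size=n)
    zs = mu0 + beta_true * thetas + rng.normal(size=n)
    # Step 2: fit the distribution map (dMap) by ordinary least squares,
    # estimating the slope beta and intercept mu0.
    beta_hat, mu_hat = np.polyfit(thetas, zs, deg=1)
    # Step 3: compute the plug-in performative optimum. For this instance,
    #   E_{z ~ D_beta_hat(theta)}[(z - theta)^2] = (mu_hat + (beta_hat - 1)*theta)^2 + 1,
    # minimized in closed form at theta = mu_hat / (1 - beta_hat).
    return mu_hat / (1.0 - beta_hat)
```

With the true parameters above, the population performative optimum is μ₀ / (1 − β) = 2.5, and the plug-in estimate concentrates around it as n grows.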
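The grid search over the three tuning parameters named in the setup (step size c₀, batch size m, and δ) could be organized as below. The specific grid points and the evaluation function `run_experiment` are hypothetical stand-ins, not taken from the paper.

```python
import itertools
import numpy as np

def grid_search(run_experiment):
    """Return the (c0, m, delta) triple minimizing run_experiment over the grid."""
    c0_grid = np.logspace(-4, -1, 4)       # step size c0 in [1e-4, 1e-1]
    m_grid = [1, 10, 100, 500]             # batch size m in [1, 500]
    delta_grid = [0.1, 1.0, 10.0, 100.0]   # delta in [0.1, 100]
    return min(
        itertools.product(c0_grid, m_grid, delta_grid),
        key=lambda cfg: run_experiment(*cfg),
    )
```

Here `run_experiment` would deploy the method with the given hyperparameters and return a validation-style score such as the final excess risk.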