Learning Strategy-Aware Linear Classifiers
Authors: Yiling Chen, Yang Liu, Chara Podimata
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide simulations to complement our theoretical analysis. Our results advance the growing literature of learning from revealed preferences, which has so far focused on smoother assumptions from the perspective of the learner and the agents respectively. In this subsection we present our simulation results. We build simulation datasets since in order to evaluate the performance of our algorithms one needs to know the original datapoints x_t. The results of our simulations are presented in Fig. 4. |
| Researcher Affiliation | Academia | Yiling Chen Harvard University yiling@seas.harvard.edu Yang Liu UC Santa Cruz yangliu@ucsc.edu Chara Podimata Harvard University podimata@g.harvard.edu |
| Pseudocode | Yes | Algorithm 2: GRINDER Algorithm for Strategic Classification |
| Open Source Code | Yes | Our code is publicly available here: https://github.com/charapod/learn-strat-class |
| Open Datasets | No | The paper states: 'We build simulation datasets since in order to evaluate the performance of our algorithms one needs to know the original datapoints x_t.' and describes how these datasets are generated (e.g., 'The +1 labeled points are drawn from Gaussian distribution as x_t ∼ (N(0.7, 0.3), N(0.7, 0.3)) and the -1 labeled points are drawn from x_t ∼ (N(0.4, 0.3), N(0.4, 0.3)).'). It does not provide access information for a publicly available dataset, nor does it make its generated dataset publicly available. |
| Dataset Splits | No | The paper builds simulation datasets and describes data generation, but does not explicitly mention or specify training, validation, or test dataset splits (e.g., percentages or counts). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the simulations, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions training a 'logistic regression model' and comparing against 'EXP3', but it does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions) that would be needed for reproducibility. |
| Experiment Setup | Yes | For the simulation, we run GRINDER against EXP3 for a horizon T = 1000, where each round was repeated for 30 repetitions. The δ-BMR agents that we used are best-responding according to the utility function of Eq. (1), and we studied 5 different values for δ: 0.05, 0.1, 0.15, 0.3, 0.5. The +1 labeled points are drawn from Gaussian distribution as x_t ∼ (N(0.7, 0.3), N(0.7, 0.3)) and the -1 labeled points are drawn from x_t ∼ (N(0.4, 0.3), N(0.4, 0.3)). |
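The data-generation step described in the experiment setup can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: the paper does not state whether 0.3 denotes the variance or the standard deviation (standard deviation is assumed here), and the number of points per class is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_per_class=500):
    """Build a 2-D simulation dataset per the paper's description:
    +1 points drawn coordinate-wise from N(0.7, 0.3),
    -1 points drawn coordinate-wise from N(0.4, 0.3).
    NOTE (assumption): 0.3 is treated as the standard deviation,
    and n_per_class is an illustrative choice."""
    pos = rng.normal(loc=0.7, scale=0.3, size=(n_per_class, 2))
    neg = rng.normal(loc=0.4, scale=0.3, size=(n_per_class, 2))
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])
    # Shuffle so the learner sees an arbitrary arrival order over the horizon T.
    idx = rng.permutation(len(y))
    return X[idx], y[idx]

X, y = make_dataset()
```

These points would then be presented sequentially to the online learner, with each x_t first transformed by the δ-BMR agent's best response before the classifier observes it.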