Strategic Classification under Unknown Personalized Manipulation

Authors: Han Shao, Avrim Blum, Omar Montasser

Venue: NeurIPS 2023

Reproducibility checklist (each entry: variable, result, and the LLM's response):
Research Type: Theoretical. "We study the fundamental mistake bound and sample complexity in the strategic classification... We begin by providing online mistake bounds and PAC sample complexity in these scenarios for ball manipulations. We also explore non-ball manipulations and show that, even in the simplest scenario where both the original and the manipulated feature vectors are revealed, the mistake bounds and sample complexity are lower bounded by Ω(|H|) when the target function belongs to a known class H." (A formal sketch of the ball-manipulation setting follows this checklist.)
Researcher Affiliation: Academia. Han Shao, Avrim Blum, and Omar Montasser, Toyota Technological Institute at Chicago, Chicago, IL 60637 (han@ttic.edu, avrim@ttic.edu, omar@ttic.edu).
Pseudocode: Yes. The paper presents Algorithm 1, Strategic Halving. (A hedged implementation sketch follows this checklist.)
Open Source Code: No. The paper is theoretical, focusing on mathematical proofs and algorithms; it does not mention releasing source code or link to a repository.
Open Datasets: No. The paper is theoretical and discusses abstract data distributions and agents sampled from an underlying distribution; it does not refer to any specific, publicly available dataset.
Dataset Splits: No. The paper describes no empirical experiments or dataset usage that would involve train/validation/test splits.
Hardware Specification: No. The paper is purely theoretical and describes no computational experiments that would require hardware specifications.
Software Dependencies: No. The paper presents theoretical algorithms and proofs; it mentions no software implementations or versioned dependencies.
Experiment Setup: No. The paper focuses on mathematical proofs and bounds, not empirical setups, hyperparameters, or training settings.
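For readers parsing the Research Type entry: below is one standard way to formalize the ball-manipulation setting the abstract refers to. The notation (the ball B(x, r), the best response Δ_h, and the strategic loss ℓ^str) is a reconstruction under common strategic-classification conventions, not necessarily the paper's own symbols.

```latex
% One standard formalization (our notation, hedged): an agent (x, r, y)
% with a personalized, unknown radius r may move anywhere in the ball
% B(x, r) to obtain a positive prediction from the deployed classifier h.
\[
\Delta_h(x, r) =
\begin{cases}
x' \in B(x, r) \text{ with } h(x') = +1, & \text{if such an } x' \text{ exists},\\
x, & \text{otherwise},
\end{cases}
\]
% and the learner is charged the strategic loss on the manipulated point:
\[
\ell^{\mathrm{str}}\bigl(h; (x, r, y)\bigr) = \mathbb{1}\bigl[h\bigl(\Delta_h(x, r)\bigr) \neq y\bigr].
\]
```

In these terms, the Ω(|H|) lower bound quoted above concerns non-ball manipulation sets: there, even with both the original and manipulated feature vectors revealed, mistakes and sample complexity can scale with the size of the class.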
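Since the only algorithmic artifact the checklist flags is Algorithm 1 (Strategic Halving), here is a minimal runnable halving-style sketch on a simulated stream of strategic agents. It is not the paper's algorithm: the toy threshold class, the agent model, and especially the simplifying assumption that the original feature x is revealed after each round are ours, chosen so that classic halving elimination is sound; the paper's Strategic Halving works without that assumption.

```python
# Hedged sketch of a halving-style online learner facing strategic agents
# with personalized ball manipulations. NOT the paper's Algorithm 1
# verbatim; see the assumptions flagged in the comments below.
import random

# Toy hypothesis class H: 1-D thresholds h_t(x) = +1 iff x >= t (our choice).
THRESHOLDS = [i / 10 for i in range(11)]  # |H| = 11
TARGET_T = 0.6                            # unknown target threshold in H

def predict(version_space, x):
    """Majority vote of the surviving version space at point x."""
    votes = sum(1 if x >= t else -1 for t in version_space)
    return 1 if votes >= 0 else -1

def best_response(deployed, x, r):
    """Agent moves within [x, x + r] to earn a +1 from `deployed` if
    possible; otherwise reports x unchanged (ball manipulation; for
    thresholds only upward moves can help)."""
    if deployed(x) == 1:
        return x
    for step in range(1, 101):            # coarse search over the ball
        cand = x + r * step / 100
        if deployed(cand) == 1:
            return cand
    return x

random.seed(0)
version_space = list(THRESHOLDS)
mistakes = 0
for _ in range(200):
    x = random.random()                   # agent's true feature
    r = random.uniform(0.0, 0.2)          # personalized, unknown radius
    y = 1 if x >= TARGET_T else -1        # true label from the target
    deployed = lambda z: predict(version_space, z)
    x_tilde = best_response(deployed, x, r)   # the learner observes this
    if deployed(x_tilde) != y:
        mistakes += 1
        # Simplifying assumption: x is revealed after the round, so we can
        # run classic halving elimination on (x, y). The paper's Strategic
        # Halving instead works from the manipulated observation alone.
        version_space = [t for t in version_space
                         if (1 if x >= t else -1) == y]
print(f"mistakes: {mistakes}, surviving hypotheses: {len(version_space)}")
```

The strategic twist shows up only in prediction: agents best-respond to the majority vote of the surviving version space, so mistakes are charged on manipulated points even though elimination here uses the (assumed-revealed) original points.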