Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Should Decision-Makers Reveal Classifiers in Online Strategic Classification?

Authors: Han Shao, Shuo Xie, Kunhe Yang

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Theoretical Our main result shows that in this setting, the decision-maker incurs (1−γ)^{-1} or k_in times more mistakes compared to the full-knowledge setting, where k_in is the maximum in-degree of the manipulation graph (representing how many distinct feature vectors can be manipulated to appear as a single one), and γ is the discount factor indicating agents' memory of past classifiers. Our results demonstrate how withholding access to the classifier can backfire and degrade the decision-maker's performance in online strategic classification. ... We provide a lower bound on the mistake bound, showing that the decision-maker will make Ω(k_in) times more mistakes. We also propose an algorithm that achieves a matching upper bound. ... Additionally, we propose an algorithm with a mistake bound of O((1−γ)^{-1} · k_in). We also explore the scenario where agents run online learning algorithms, and discuss the challenge of modeling learning agents through diminishing regret due to their non-static and large action space.
Researcher Affiliation Academia 1 CMSA, Harvard University; 2 Toyota Technological Institute at Chicago; 3 University of California, Berkeley. Correspondence to: Han Shao <EMAIL>, Shuo Xie <EMAIL>, Kunhe Yang <EMAIL>.
Pseudocode Yes Algorithm 1 Reduction from Revealed-Arb to classical (non-strategic) online learning ... Algorithm 2 Conservative algorithm for γ-Weighted ... Algorithm 3 Reduction from γ-Weighted to Revealed-Arb
Open Source Code No The paper does not provide any statement about releasing source code, nor does it include links to any code repositories. The 'Impact Statement' section notes: 'This work is purely theoretical. We do not foresee any immediate societal consequences.'
Open Datasets No The paper focuses on theoretical models and bounds in strategic classification, discussing abstract concepts like 'feature vector space X' and 'manipulation graph G'. It does not describe experiments conducted on specific datasets, public or otherwise.
Dataset Splits No The paper does not conduct experiments on any specific dataset, therefore, there are no mentions of dataset splits for training, validation, or testing.
Hardware Specification No This paper is purely theoretical, focusing on algorithms, theorems, and mathematical bounds. It does not describe any experimental setup that would require hardware specifications.
Software Dependencies No This paper is purely theoretical and does not report any experimental results that would require specific software dependencies or version numbers.
Experiment Setup No The paper is theoretical, presenting algorithms and mathematical proofs rather than empirical experiments. Therefore, there are no details provided regarding experimental setup, hyperparameters, or system-level training settings.
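The mistake bounds quoted in the Research Type cell can be restated for clarity. This is a sketch of the multiplicative relationship as quoted; the names M_hidden and M_full are notation introduced here (not the paper's) for the number of mistakes in the hidden-classifier and full-knowledge settings, respectively.

```latex
% Sketch of the quoted bounds; M_hidden / M_full are illustrative notation,
% not taken from the paper. k_in is the maximum in-degree of the
% manipulation graph; gamma is the agents' discount factor.
M_{\text{hidden}} \;=\; \Omega\!\left(k_{\mathrm{in}}\right)\cdot M_{\text{full}}
    \qquad \text{(lower bound: hiding the classifier costs a factor of } k_{\mathrm{in}}\text{)}
M_{\text{hidden}} \;=\; O\!\left((1-\gamma)^{-1}\, k_{\mathrm{in}}\right)
    \qquad \text{(upper bound achieved by the proposed algorithm)}
```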