Strategic Classification with Unknown User Manipulations

Authors: Tosca Lechner, Ruth Urner, Shai Ben-David

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Theoretical We propose a novel formalization of this batch-learning setting for strategic classification. In our framework, learning and data generation proceed in rounds. Initially, the learner receives a labeled sample S_0 from some underlying (unmanipulated) distribution. Given this sample of labeled data, the learner decides on (and publishes) a classifier h_0. From then on, in each round t, the learner receives an unlabeled sample from the same data distribution, with the caveat that the features were strategically manipulated in response to h_{t-1}. The learner then decides on classifier h_t, based on the labeled data sample S_0 and unlabeled samples S_1, S_2, ..., S_{t-1}. While the learner does not have access to the underlying feature manipulation capabilities of the instances, we assume that the true manipulation structure is a member of a class of possible such structures (graphs). We show that by exploiting the observed distribution shift in this batch-learning setting it is possible to learn the optimal classifier in terms of the so-called strategic loss (Lechner & Urner, 2022) even without knowing the underlying manipulation capabilities. For a wide range of combinations of hypothesis classes H and manipulation graph classes G, we provide first positive results and analysis in this learning setup. More specifically, we derive bounds on sufficient sample sizes as well as the number of rounds for the learner to produce a classification rule with low loss. We focus in particular on graph classes which are totally ordered by inclusion, which captures the case in which it is unknown how manipulation costs compare to the value of being classified with the desired label. We show that in these cases batch-learning is possible if the VC-dimension of the loss class of (G ◦ H) is finite.
Researcher Affiliation Collaboration (1) Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada; (2) Lassonde School of Engineering, EECS Department, York University, Toronto, Ontario, Canada; (3) Vector Institute, Toronto, Ontario, Canada.
Pseudocode Yes Algorithm 1 Strategic Batch-Learning for Thresholds; Algorithm 2 Empirical Manipulation Estimation (realizable); Algorithm 3 Strategic Batch-Learning for totally ordered graph classes
Open Source Code No The paper does not provide any explicit statements about releasing source code for the described methodology, nor does it include links to a code repository.
Open Datasets No The paper is theoretical and does not conduct experiments on specific datasets. It refers to 'samples from P' which denotes data from a theoretical distribution, not a concrete, publicly available dataset.
Dataset Splits No The paper is theoretical and does not describe empirical experiments, thus no training, validation, or test splits are mentioned.
Hardware Specification No The paper is theoretical and does not describe empirical experiments, thus no hardware specifications are mentioned.
Software Dependencies No The paper is theoretical and does not describe empirical experiments or software implementations, thus no specific software dependencies with version numbers are provided.
Experiment Setup No The paper is theoretical and does not describe empirical experiments, thus no experimental setup details like hyperparameters or training settings are provided.
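To make the round-based protocol from the abstract concrete, the following is a minimal toy sketch (not the paper's Algorithms 1-3): a one-dimensional threshold classifier, where each candidate manipulation radius plays the role of a graph in a class totally ordered by inclusion. The learner publishes a classifier, observes the manipulated unlabeled sample, and eliminates candidate radii inconsistent with the observed distribution shift. All function names, the elimination rule, and the data distribution here are illustrative assumptions, not the authors' method.

```python
import random

def best_response(x, threshold, radius):
    """Toy agent model (an assumption, not from the paper): an agent
    moves to the threshold if it is within `radius` and doing so flips
    its classification to positive; otherwise it stays put."""
    if x < threshold and threshold - x <= radius:
        return threshold
    return x

random.seed(0)
true_radius = 0.1      # the true manipulation structure, unknown to the learner
true_boundary = 0.6    # ground-truth label: 1 iff x >= 0.6

# Round 0: labeled, unmanipulated sample S_0 -> initial classifier h_0.
S0 = [(x, int(x >= true_boundary))
      for x in (random.random() for _ in range(1000))]
h = true_boundary      # assume ERM on S_0 recovers the boundary exactly

# Candidate radii, totally ordered by inclusion: r <= r' means every
# manipulation feasible under r is also feasible under r'.
candidates = [0.0, 0.05, 0.1, 0.2]

for t in range(3):     # a few interaction rounds
    # Unlabeled sample from the same distribution, with features
    # manipulated in best response to the published classifier h.
    S_t = [best_response(random.random(), h, true_radius)
           for _ in range(1000)]
    # Eliminate any radius r inconsistent with the observed shift:
    # under radius r, no manipulated point can lie strictly inside
    # (h - r, h), since all such points would have moved to h.
    candidates = [r for r in candidates
                  if not any(h - r < x < h for x in S_t)]

# Conservative estimate: the largest surviving candidate radius.
estimated_radius = max(candidates)
print(estimated_radius)  # → 0.1
```

With enough samples, every radius larger than the true one is ruled out because unmanipulated points fall inside the gap it would predict, so the maximal surviving candidate converges to the true radius; the paper's actual guarantees instead bound sample sizes and rounds via the VC-dimension of the induced loss class.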