Transparency, Detection and Imitation in Strategic Classification

Authors: Flavia Barsotti, Ruya Gokhan Kocer, Fernando P. Santos

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We simulate the interplay between explanations shared by an Institution (e.g., a bank) and the dynamics of strategic adaptation by Individuals reacting to such feedback. Our model identifies key aspects of strategic adaptation and the challenges an institution could face as it attempts to provide explanations. Resorting to an agent-based approach, our model scrutinizes: i) the impact of transparency in explanations, ii) the interaction between faking behavior and detection capacity, and iii) the role of behavior imitation. We find that the risks of transparent explanations are alleviated if effective methods to detect faking behaviors are in place. Furthermore, we observe that behavioral imitation, as often happens across societies, can alleviate malicious adaptation and contribute to accuracy, even after transparent explanations. Section 3 discusses the results of the simulation study performed by means of the proposed analytical setting. Parameters: N = 100, b = 1.0, cι = 3.0, ϵ = 0, ϕ = 1/3, α = 0, k = 0.5 (B and C). Results in A are an average over 10³ runs starting from random initial conditions.
Researcher Affiliation | Collaboration | 1) ING Analytics, Amsterdam, The Netherlands; 2) Institute for Advanced Study, University of Amsterdam, The Netherlands; 3) Delft Institute of Applied Mathematics, TU Delft, Delft, The Netherlands; 4) Informatics Institute, University of Amsterdam, The Netherlands. Contact: {flavia.barsotti, ruya.kocer}@ing.com, f.p.santos@uva.nl
Pseudocode | No | No structured pseudocode or algorithm blocks (e.g., blocks labeled "Algorithm" or "Pseudocode") were found in the paper.
Open Source Code | Yes | Supplementary Information available at: https://github.com/fp-santos/strategic-classification-imitation
Open Datasets | No | The paper describes a simulation study rather than using a pre-existing public dataset. It defines a mathematical model and simulates agent interactions based on specified parameters (e.g., N = 100 individuals) rather than using an externally provided dataset with access information.
Dataset Splits | No | The paper describes a simulation study and does not refer to traditional dataset splits (training, validation, test) in terms of percentages or counts, as it generates data through simulation rather than processing a pre-existing dataset.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run the simulations.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9), which are necessary for reproducibility.
Experiment Setup | Yes | Parameters: N = 100, b = 1.0, cι = 3.0, ϵ = 0, ϕ = 1/3, α = 0, k = 0.5 (B and C). Results in A are an average over 10³ runs starting from random initial conditions.
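The reported setup averages an outcome over 10³ independent simulation runs, each started from random initial conditions. A minimal Python sketch of that averaging loop follows; the `single_run` body is a placeholder (the paper's actual agent-based dynamics of adaptation, detection, and imitation are not reproduced here), and all function names and the dummy outcome are hypothetical.

```python
import random

# Parameter set mirroring the reported values; names are transliterated
# guesses for the paper's symbols and carry no dynamics here.
PARAMS = {"N": 100, "b": 1.0, "c_i": 3.0, "eps": 0.0,
          "phi": 1 / 3, "alpha": 0.0, "k": 0.5}
N_RUNS = 10 ** 3  # "average over 10^3 runs starting from random initial conditions"

def single_run(params, rng):
    # Placeholder for one run: draw a random initial condition for N agents
    # and return a scalar outcome (stand-in for, e.g., classification accuracy).
    state = [rng.random() for _ in range(params["N"])]
    return sum(state) / params["N"]

def average_over_runs(params, n_runs, seed=0):
    # Average the per-run outcome over n_runs independent random restarts.
    rng = random.Random(seed)
    return sum(single_run(params, rng) for _ in range(n_runs)) / n_runs

result = average_over_runs(PARAMS, N_RUNS)
```

Seeding a single generator and reusing it across runs keeps the whole experiment reproducible while still giving each run a distinct random initial condition.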