Network Effects in Performative Prediction Games

Authors: Xiaolu Wang, Chung-Yiu Yau, Hoi To Wai

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical illustrations on the network effects in Multi-PP games corroborate our findings. Numerical Illustration. We examine the network effects on the multi-agent logistic regression game via simulating the SG-GD algorithm.
Researcher Affiliation | Academia | Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China.
Pseudocode | Yes | Algorithm 1: Stochastic Gradient Greedy Deployment
1: Input: θ_i^0 for i ∈ [n], step size γ_t > 0 for t ≥ 1.
2: for t = 0, 1, . . . do
3:   Deploy the models {θ_i^t}_{i=1}^n at the population.
4:   for i = 1 to n do {executed in parallel}
5:     Sample Z_i^{t+1} ∼ D_i(θ_i^t, θ_{N_i}^t)
6:     Set g^t = ∇ℓ_i(θ_i^t; Z_i^{t+1}) + ρ_i Σ_{j=1}^n A_ij (θ_i^t − θ_j^t)
7:     Set θ_i^{t+1} = θ_i^t − γ_{t+1} g^t
8:   end for
9: end for
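Algorithm 1 can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the interfaces `sample_fn` (drawing from the decision-dependent distribution D_i(θ_i, θ_{N_i})) and `grad_loss` (the stochastic gradient of agent i's loss ℓ_i) are hypothetical placeholders for the quantities in lines 5 and 6 of the pseudocode.

```python
import numpy as np

def sg_gd(sample_fn, grad_loss, A, rho, step_size, theta0, num_iters):
    """Sketch of SG-GD (Algorithm 1): stochastic gradient with greedy deployment.

    sample_fn(i, theta)      -- draws Z_i from D_i(theta_i, theta_{N_i})
                                (hypothetical interface).
    grad_loss(i, theta_i, Z) -- stochastic gradient of agent i's loss.
    A         -- (n, n) weighted adjacency matrix of the agent network.
    rho       -- length-n vector of graph-regularization weights rho_i.
    step_size(t) -- step size gamma_t for iteration t >= 1.
    theta0    -- (n, d) array of initial models, one row per agent.
    """
    theta = np.array(theta0, dtype=float)
    n = theta.shape[0]
    for t in range(num_iters):
        # Deploy the current models, then update all agents "in parallel":
        # every agent reads the same deployed theta before anyone updates.
        new_theta = theta.copy()
        for i in range(n):
            Z = sample_fn(i, theta)  # Z_i^{t+1} ~ D_i(theta_i^t, theta_{N_i}^t)
            # Graph-regularization term: rho_i * sum_j A_ij (theta_i - theta_j)
            reg = rho[i] * (A[i][:, None] * (theta[i] - theta)).sum(axis=0)
            g = grad_loss(i, theta[i], Z) + reg
            new_theta[i] = theta[i] - step_size(t + 1) * g
        theta = new_theta
    return theta
```

With `rho = 0` the update decouples into independent stochastic gradient steps per agent; the `A`-weighted term is what couples neighboring agents' models.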
Open Source Code | No | No statement about open-source code release found.
Open Datasets | Yes | Finally, we validate the results of this paper in a semi-realistic setting by sampling from a Kaggle dataset (Give Me Some Credit).
Dataset Splits | No | Similar to Bellet et al. (2018), each agent holds a training dataset of size 1 ≤ S_i ≤ 100 and a testing dataset of size 100.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for experiments were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers were mentioned.
Experiment Setup | Yes | From Figure 4, we observe that while enabling graph regularization (with ρ = 1) allows the agents to maintain a high accuracy in classification in general (ε ∈ {0, 0.1}), under large distribution shifts (ε = 10) of negative samples, it may lead to degraded performance. Logistic regression problem with an ℓ2-regularization (λ/2)‖θ_i‖^2 and λ = 10^-4.
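The per-agent objective described above, an ℓ2-regularized logistic loss with λ = 10⁻⁴, has a standard closed-form gradient. The sketch below assumes labels y ∈ {−1, +1} and a plain data matrix X; it is an illustrative reconstruction of the loss family, not the paper's code.

```python
import numpy as np

def logistic_loss_grad(theta, X, y, lam=1e-4):
    """Gradient of the l2-regularized logistic loss
        (1/m) * sum_k log(1 + exp(-y_k <x_k, theta>)) + (lam/2) * ||theta||^2
    with labels y in {-1, +1}.
    """
    s = -y * (X @ theta)               # margins, negated
    sig = 1.0 / (1.0 + np.exp(-s))     # sigmoid(-y <x, theta>)
    # d/dtheta log(1 + exp(s)) = -y * x * sigmoid(s), averaged over samples
    grad = -(X * (y * sig)[:, None]).mean(axis=0) + lam * theta
    return grad
```

This is exactly the kind of `grad_loss` callback one would plug into a per-agent stochastic gradient step, evaluated on samples drawn from the agent's (shifted) distribution.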