Multi-agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents
Authors: Qiang LI, Chung-Yiu Yau, Hoi-To Wai
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results validate our analysis. We consider two examples of performative prediction problems to verify our theories. All experiments are conducted with Python on a server using 80 threads of an Intel Xeon 6318 CPU. Multi-agent Gaussian Mean Estimation. We aim to illustrate Proposition 1, Theorem 1 via a scalar Gaussian mean estimation problem on synthetic data. Email Spam Classification. We evaluate the performance of DSGD-GD by simulating the performative effects on a real dataset. |
| Researcher Affiliation | Academia | Qiang Li, Chung-Yiu Yau, Hoi-To Wai; Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong SAR of China. {liqiang, cyyau, htwai}@se.cuhk.edu.hk |
| Pseudocode | Yes | DSGD with Greedy Deployment (DSGD-GD) Scheme. At iteration $t = 0, 1, \ldots$, for any $i \in V$, agent $i$ updates his/her decision $\theta_i^t$ by a recursion consisting of two phases: (Phase 1) $Z_i^{t+1} \sim \mathcal{D}_i(\theta_i^t)$; (Phase 2) $\theta_i^{t+1} = \sum_{j=1}^n W_{ij} \theta_j^t - \gamma_{t+1} \nabla \ell(\theta_i^t; Z_i^{t+1})$ (5) |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is open-source or publicly available. |
| Open Datasets | Yes | Email Spam Classification. This example is a multi-agent spam classification task based on spambase, a dataset [Hopkins, 1999] with m = 4601 samples, d = 48 features. |
| Dataset Splits | Yes | Each server has access to training data of $m_i = 138$ samples from spambase, modeling a different set of users; the remaining 1150 samples are taken as testing data. |
| Hardware Specification | Yes | All experiments are conducted with Python on a server using 80 threads of an Intel Xeon 6318 CPU. |
| Software Dependencies | No | The paper mentions that experiments are run with Python, but it does not list specific libraries or version numbers. |
| Experiment Setup | Yes | In our experiments, we set $z_i = 10$, $\sigma^2 = 50$ and the step size for DSGD-GD as $\gamma_t = a_0/(a_1 + t)$ with $a_0 = 50$, $a_1 = 10^4$. The sensitivity parameters are set as $\epsilon_i \in \{0.4\epsilon_{\text{avg}}, 0.45\epsilon_{\text{avg}}, \ldots, 1.6\epsilon_{\text{avg}}\}$ with $\epsilon = \epsilon_{\text{avg}} \in \{0.01, 0.1, 1\}$. The servers aim to find a common spam filter classifier via (2) with $\beta = 10^{-4}$. |
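The two-phase DSGD-GD recursion (5) can be sketched on the paper's scalar Gaussian mean estimation example. This is a minimal illustration, not the authors' code: the number of agents, the ring mixing matrix $W$, and the exact form of the distribution map (here assumed to be $\mathcal{D}_i(\theta) = \mathcal{N}(z_i - \epsilon_i\theta, \sigma^2)$, which may differ in sign or shape from the paper's map) are assumptions. The values $z_i = 10$, $\sigma^2 = 50$, $\gamma_t = a_0/(a_1+t)$ with $a_0 = 50$, $a_1 = 10^4$, and a $\{0.4, \ldots, 1.6\}\times\epsilon_{\text{avg}}$ sensitivity spread follow the reported setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                                   # number of agents (hypothetical)
z = np.full(n, 10.0)                    # base means z_i = 10 (paper's setup)
sigma = np.sqrt(50.0)                   # noise std, sigma^2 = 50 (paper's setup)
eps = np.linspace(0.4, 1.6, n) * 0.1    # eps_i spread around eps_avg = 0.1

# Doubly stochastic mixing matrix W: ring topology with self-loops (assumed).
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

theta = np.zeros(n)
a0, a1 = 50.0, 1e4                      # step size gamma_t = a0 / (a1 + t)

for t in range(20000):
    # Phase 1 (greedy deployment): each agent samples Z_i ~ D_i(theta_i^t).
    # Assumed shift map: D_i(theta) = N(z_i - eps_i * theta, sigma^2).
    Z = rng.normal(z - eps * theta, sigma)
    # Phase 2: gossip averaging with W, then a stochastic gradient step on
    # the quadratic loss l(theta; Z) = (theta - Z)^2 / 2, so grad = theta - Z.
    gamma = a0 / (a1 + t + 1)
    theta = W @ theta - gamma * (theta - Z)

print(theta)   # agents should be near-consensus around a common fixed point
```

With the assumed map, the consensus fixed point solves $\sum_i (\theta - (z_i - \epsilon_i\theta)) = 0$, i.e. $\theta \approx \bar z/(1 + \bar\epsilon)$; the diminishing step size damps the sampling noise while the gossip step drives the agents toward consensus.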