Multi-Sender Persuasion: A Computational Perspective
Authors: Safwan Hossain, Tonghan Wang, Tao Lin, Yiling Chen, David C. Parkes, Haifeng Xu
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Broadly, our theoretical and empirical contributions are of interest to a large class of economic problems. |
| Researcher Affiliation | Academia | 1Harvard University 2University of Chicago. Correspondence to: Safwan Hossain, Tonghan Wang, Tao Lin <{shossain, twang1, tlin}@g.harvard.edu>. |
| Pseudocode | No | The paper describes algorithmic steps (e.g., extra-gradient updates) but does not present them as structured pseudocode or in an algorithm block; a hedged sketch of such an extra-gradient procedure is given below the table. |
| Open Source Code | No | The paper does not provide concrete access (e.g., a link or repository) to source code for the described methodology. |
| Open Datasets | No | The paper describes generating synthetic problems and scenarios, but does not provide concrete access information (link, DOI, repository, or citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper mentions collecting a dataset and training networks but does not provide the split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce the train/validation/test partitioning. |
| Hardware Specification | No | The paper does not specify the hardware (exact GPU/CPU models, processor types and speeds, or memory amounts) used to run its experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer (Kingma & Ba, 2014) but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For each problem instance, we collect a dataset comprising 50,000 randomly selected samples and train the networks for 30 epochs using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.01. For extra-gradient, we initiate the optimization process from a set of 300 random starting points. For each starting point, we run 20 iterations of extra-gradient updates with the Adam optimizer and a learning rate of 0.1. |
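To make the quoted Experiment Setup row concrete, here is a minimal, hypothetical sketch of the surrogate-training stage: 50,000 sampled (input, utility) pairs, 30 epochs of Adam at learning rate 0.01, as quoted above. The MLP architecture, the MSE loss, the batch size of 256, and the input format are assumptions not stated in the paper, not the authors' implementation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_utility_surrogate(inputs, targets, hidden=64, epochs=30, lr=0.01):
    """Fit a small MLP to (sampled input, utility) pairs.

    inputs:  (50000, d) float tensor of sampled points (assumed format).
    targets: (50000, 1) float tensor of the corresponding utilities.
    The architecture and loss are placeholders; only the sample count,
    epoch count, and Adam learning rate come from the paper's description.
    """
    net = nn.Sequential(
        nn.Linear(inputs.shape[1], hidden), nn.ReLU(),
        nn.Linear(hidden, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)  # lr = 0.01 as quoted
    loader = DataLoader(TensorDataset(inputs, targets),
                        batch_size=256, shuffle=True)  # batch size assumed
    for _ in range(epochs):  # 30 epochs as quoted
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(x), y)
            loss.backward()
            opt.step()
    return net
```

A usage call might look like `train_utility_surrogate(torch.rand(50_000, 8), torch.rand(50_000, 1))`, where the input dimension of 8 and the random placeholder data are purely illustrative.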
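The Pseudocode and Experiment Setup rows also reference extra-gradient updates run from 300 random starting points for 20 iterations with a learning rate of 0.1. The sketch below illustrates one plausible form of that procedure for a game among senders who each adjust only their own parameters; it is not the authors' implementation. The `utility_nets` surrogates, the per-sender parameter dimensions `dims`, and the use of plain gradient steps in place of the paper's Adam-based updates are assumptions; only the start count, iteration count, and learning rate come from the quoted setup.

```python
import torch

def game_gradients(utility_nets, xs):
    """For each sender i, gradient of its own utility w.r.t. its own parameters.

    utility_nets[i](xs) is assumed to return a scalar utility for sender i
    given the full list of per-sender parameter tensors `xs`.
    """
    grads = []
    for i, net in enumerate(utility_nets):
        u_i = net(xs)
        g_i = torch.autograd.grad(u_i, xs[i])[0]
        grads.append(g_i)
    return grads

def extra_gradient_search(utility_nets, dims, n_starts=300, n_iters=20, lr=0.1):
    """Extra-gradient from many random starts (counts and lr as quoted above)."""
    candidates = []
    for _ in range(n_starts):
        xs = [torch.rand(d, requires_grad=True) for d in dims]
        for _ in range(n_iters):
            # Extrapolation step: each sender ascends its own utility.
            grads = game_gradients(utility_nets, xs)
            xs_mid = [(x + lr * g).detach().requires_grad_(True)
                      for x, g in zip(xs, grads)]
            # Update step: re-evaluate the gradients at the extrapolated
            # point and apply them to the original iterate.
            grads_mid = game_gradients(utility_nets, xs_mid)
            xs = [(x + lr * g).detach().requires_grad_(True)
                  for x, g in zip(xs, grads_mid)]
        candidates.append([x.detach() for x in xs])
    return candidates
```

The extrapolate-then-update structure is what distinguishes extra-gradient from simultaneous gradient ascent; the paper's version additionally routes these updates through the Adam optimizer, which this sketch omits for brevity.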