Multi-Receiver Online Bayesian Persuasion
Authors: Matteo Castiglioni, Alberto Marchesi, Andrea Celli, Nicola Gatti
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any 0 < α ≤ 1, there is no polynomial-time no-α-regret algorithm when the sender's utility function is supermodular or anonymous. Then, we focus on the case of submodular sender's utility functions and we show that, in this case, it is possible to design a polynomial-time no-(1 − 1/e)-regret algorithm. To do so, we introduce a general online gradient descent scheme to handle online learning problems with a finite number of possible loss functions. |
| Researcher Affiliation | Academia | Matteo Castiglioni 1 Alberto Marchesi 1 Andrea Celli 1 Nicola Gatti 1 Politecnico di Milano, Milan, Italy. |
| Pseudocode | Yes | Algorithm 1 OGD-APO |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide links to a code repository. |
| Open Datasets | No | This is a theoretical paper focusing on algorithm design and proofs; it does not describe experiments involving datasets for training. |
| Dataset Splits | No | This is a theoretical paper; therefore, it does not discuss training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the specific hardware used. |
| Software Dependencies | No | The paper focuses on theoretical contributions and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, hyperparameters, or training settings. |
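The quoted abstract mentions an online gradient descent scheme for online learning problems with a finite number of possible loss functions. The sketch below is not the paper's Algorithm 1 (OGD-APO); it is a generic illustration, under the assumption of convex losses and a simplex decision set, of the projected-OGD template such schemes build on. The function names (`project_simplex`, `ogd`) and the choice of linear losses are illustrative, not from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def ogd(grads, T, dim, eta, seed=0):
    """Projected online gradient descent over the simplex.

    grads: finite list of gradient oracles, one per possible loss function;
           an adversary (here, random) picks one loss each round.
    Returns the sequence of plays.
    """
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)  # start at the uniform distribution
    plays = []
    for _ in range(T):
        k = rng.integers(len(grads))                 # adversary's loss this round
        plays.append(x.copy())
        x = project_simplex(x - eta * grads[k](x))   # gradient step, then project
    return plays

# Example: two linear losses l_k(x) = c_k . x over a 3-point simplex.
c1, c2 = np.array([1.0, 0.0, 0.5]), np.array([0.5, 1.0, 0.0])
plays = ogd([lambda x: c1, lambda x: c2], T=200, dim=3, eta=0.1)
```

Every iterate stays a valid probability distribution, which is the invariant the projection step is responsible for; the standard O(√T) regret bound for this template holds when the losses are convex with bounded gradients.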