Learning Revenue Maximization Using Posted Prices for Stochastic Strategic Patient Buyers
Authors: Eitan-Hai Mashiah, Idan Attias, Yishay Mansour
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We derive a sample complexity bound for learning an approximately optimal pure strategy by computing the fat-shattering dimension of this setting. Moreover, we provide a general sample complexity bound for the approximately optimal mixed strategy. We also consider an online setting and derive a vanishing regret bound with respect to both the optimal pure strategy and the optimal mixed strategy. (Illustrative sketches of the sample-based and online settings follow this table.) |
| Researcher Affiliation | Collaboration | Eitan-Hai Mashiah (1), Idan Attias (2), Yishay Mansour (1,3); 1: Tel Aviv University, Israel; 2: Ben-Gurion University, Israel; 3: Google Research, Israel |
| Pseudocode | No | The paper describes algorithms (e.g., dynamic programming, linear programming) but does not present them in structured pseudocode blocks or algorithm listings; a hedged sketch of the posted-price setting appears after this table. |
| Open Source Code | No | The paper does not include any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper is theoretical and discusses learning from 'samples' drawn from an unknown distribution D, but it does not refer to any specific publicly available datasets used for training or empirical evaluation. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not mention any specific hardware used for experiments, as it focuses on theoretical contributions. |
| Software Dependencies | No | The paper, being theoretical, does not specify any software dependencies or version numbers needed for reproducibility. |
| Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, such as hyperparameters or system-level training settings. |
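
The sample-based setting the Research Type row summarizes can be made concrete with a short sketch. The following is a minimal toy model, assuming a buyer drawn from the unknown distribution D is described by an arrival day, a value, and a patience window, and buys at the cheapest posted price within that window whenever it is at most the buyer's value. The function names (`buyer_payment`, `empirical_revenue`) and the grid of constant-price candidate sequences are hypothetical illustrations, not the paper's algorithm.

```python
import random

def buyer_payment(prices, arrival, value, patience):
    """Revenue from one patient buyer: buy at the cheapest price in the
    window prices[arrival..arrival+patience], if it is at most the value."""
    best = min(prices[arrival : arrival + patience + 1])
    return best if best <= value else 0.0

def empirical_revenue(prices, sample):
    """Average revenue of a fixed posted-price sequence over sampled buyers."""
    return sum(buyer_payment(prices, a, v, tau)
               for a, v, tau in sample) / len(sample)

# Toy usage: draw (arrival, value, patience) samples and pick the best
# sequence from a small hypothetical grid of constant prices.
random.seed(0)
T = 5
sample = [(random.randrange(T), random.random(), random.randrange(2))
          for _ in range(1000)]
candidates = [[p] * (T + 1) for p in (0.25, 0.5, 0.75)]  # padded past day T
best = max(candidates, key=lambda p: empirical_revenue(p, sample))
print(best[0], round(empirical_revenue(best, sample), 3))
```

Maximizing `empirical_revenue` over a candidate class is the kind of sample-based selection whose sample complexity the paper bounds via the fat-shattering dimension; with uniform values, a constant price p earns about p(1 - p) per buyer, so the toy run selects 0.5.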
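
For the online setting, the paper derives vanishing-regret guarantees. The sketch below is a generic Hedge (multiplicative-weights) template over a finite set of candidate price sequences with toy stand-in rewards; it is not the paper's algorithm, only a standard no-regret pattern of the kind such guarantees are usually stated for.

```python
import math
import random

def hedge(candidates, reward_of, rounds, eta):
    """Generic Hedge: keep exponential weights over candidates, play one at
    random each round, and update on full-feedback rewards in [0, 1]."""
    weights = [1.0] * len(candidates)
    total = 0.0
    for t in range(rounds):
        z = sum(weights)
        i = random.choices(range(len(candidates)),
                           weights=[w / z for w in weights])[0]
        rewards = [reward_of(c, t) for c in candidates]  # observe all rewards
        total += rewards[i]
        weights = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    return total

# Toy usage: candidates stand in for price sequences; a buyer with a uniform
# value accepts a constant price p with probability 1 - p.
random.seed(1)
cands = [0.25, 0.5, 0.75]
reward = lambda p, t: p if random.random() <= 1 - p else 0.0
T = 2000
eta = math.sqrt(2 * math.log(len(cands)) / T)
print(round(hedge(cands, reward, T, eta) / T, 3))
```

With eta = sqrt(2 ln N / T) over N candidates, the standard Hedge analysis gives regret O(sqrt(T ln N)) against the best fixed candidate, which is the flavor of vanishing-regret guarantee the Research Type row describes.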