Learning Optimal Reserve Price against Non-myopic Bidders
Authors: Jinyan Liu, Zhiyi Huang, Xiangning Wang
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We introduce algorithms that obtain a small regret against non-myopic bidders either when the market is large, i.e., no single bidder appears in more than a small constant fraction of the rounds, or when the bidders are impatient, i.e., they discount future utility by some factor mildly bounded away from one. Our approach carefully controls what information is revealed to each bidder, and builds on techniques from differentially private online learning as well as the recent line of works on jointly differentially private algorithms. |
| Researcher Affiliation | Academia | Zhiyi Huang, Jinyan Liu, Xiangning Wang; Department of Computer Science, The University of Hong Kong; {zhiyi, jyliu, xnwang}@cs.hku.hk |
| Pseudocode | Yes | Algorithm 1: Tree-aggregation; Algorithm 2: Online Pricing (Single-bidder Case); Algorithm 3: Online Pricing (Multi-bidder Case). Illustrative sketches of these building blocks follow the table. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper is theoretical and does not describe experiments using datasets. Therefore, no information regarding dataset availability for training is provided. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments. There is no mention of training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe empirical experiments. Thus, no hardware specifications used for running experiments are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe empirical experiments. Therefore, no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs. It does not provide details of an experimental setup, such as hyperparameters or training configurations for empirical evaluation. |
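
The paper's Algorithm 1 ("Tree-aggregation") refers to the standard tree-aggregation, or "binary mechanism", technique from differentially private continual observation, which the online pricing algorithms use to release running statistics about past bids without revealing too much about any single round. The Python sketch below illustrates that building block only; it is not the paper's own pseudocode, and the names used here (`binary_mechanism`, `eps`, the bid stream in the usage lines) are illustrative assumptions.

```python
# A minimal sketch of the standard tree-aggregation ("binary mechanism") technique from
# differentially private continual observation; NOT the paper's exact Algorithm 1.
# Function and variable names here are illustrative assumptions.
import math
import numpy as np

def binary_mechanism(stream, eps, T):
    """Release noisy running prefix sums of a bounded stream of length at most T.

    Each time step t is covered by at most log2(T) nodes of an implicit binary tree;
    Laplace noise of scale log2(T)/eps is added once per node, so every released
    prefix-sum estimate aggregates only O(log T) noise terms.
    """
    levels = max(1, math.ceil(math.log2(T + 1)))
    alpha = [0.0] * levels      # exact partial sums, one per tree level
    alpha_hat = [0.0] * levels  # noisy partial sums, one per tree level
    outputs = []
    for t, x_t in enumerate(stream, start=1):
        # the tree node that "closes" at time t sits at the level of t's lowest set bit
        i = (t & -t).bit_length() - 1
        # fold all lower-level partial sums (plus the new item) into that node ...
        alpha[i] = sum(alpha[j] for j in range(i)) + x_t
        for j in range(i):      # ... and reset the lower levels
            alpha[j] = 0.0
            alpha_hat[j] = 0.0
        alpha_hat[i] = alpha[i] + np.random.laplace(scale=levels / eps)
        # the prefix sum up to t is the sum of the noisy nodes on the dyadic cover of [1, t]
        outputs.append(sum(alpha_hat[j] for j in range(levels) if (t >> j) & 1))
    return outputs

# Usage (hypothetical): privately track how many of the first t bids clear a candidate reserve.
bids = np.random.rand(1000)
noisy_counts = binary_mechanism((bids > 0.5).astype(float), eps=1.0, T=len(bids))
```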
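
As a further, heavily hedged illustration of how such noisy running statistics could drive reserve selection, the toy loop below keeps one tree-aggregation state per candidate price on a discretized grid and posts the grid price with the best noisy cumulative revenue so far. This is a generic pattern from differentially private online learning, not the paper's Algorithm 2 or 3, which additionally control what each bidder observes and handle non-myopic behavior; every name and parameter in the snippet (`price_grid`, `eps`, the uniform bid model) is an assumption for illustration only.

```python
# Toy sketch: reserve selection from a discretized price grid using privately maintained
# revenue estimates. A generic differentially private online-learning pattern, NOT the
# paper's Algorithm 2/3; all names and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 2000
eps = 1.0
price_grid = np.linspace(0.1, 0.9, 9)            # candidate reserve prices
levels = int(np.ceil(np.log2(T + 1)))

# one binary-mechanism state (as in the previous sketch) per candidate price
alpha = np.zeros((len(price_grid), levels))
alpha_hat = np.zeros((len(price_grid), levels))

def noisy_revenue(t):
    """Noisy cumulative revenue of each candidate price after t rounds."""
    bits = np.array([(t >> j) & 1 for j in range(levels)])
    return (alpha_hat * bits).sum(axis=1)

revenue = 0.0
for t in range(1, T + 1):
    # post the grid price with the best noisy revenue estimate from past rounds
    reserve = price_grid[int(np.argmax(noisy_revenue(t - 1)))]
    bid = rng.random()                           # stand-in for the bidder's reported value
    revenue += reserve if bid >= reserve else 0.0

    # this round's counterfactual revenue of every grid price feeds the tree aggregation
    gains = np.where(bid >= price_grid, price_grid, 0.0)
    i = (t & -t).bit_length() - 1                # level of the node closing at time t
    alpha[:, i] = alpha[:, :i].sum(axis=1) + gains
    alpha[:, :i] = 0.0
    alpha_hat[:, :i] = 0.0
    alpha_hat[:, i] = alpha[:, i] + rng.laplace(scale=levels / eps, size=len(price_grid))

print(f"total revenue over {T} rounds: {revenue:.1f}")
```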