Reserve Price Optimization for First Price Auctions in Display Advertising
Authors: Zhe Feng, Sebastien Lahaie, Jon Schneider, Jinchao Ye
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach over synthetic data where bidder values are drawn uniformly, and also over real bid distributions collected from the logs of a major ad exchange. Our experimental results confirm that the combination of variance reduction on both objective components leads to the fastest convergence rate. For the demand component, a simple logistic model works well over the synthetic (i.e., uniform) data, but a flexible neural net is needed over the semi-synthetic data. For the bidding component, we find that quantile truncation is much more robust to assumptions on the bidding model. |
| Researcher Affiliation | Collaboration | ¹Harvard University (this work was done while the first author was an intern at Google Inc., NYC); ²Google Inc., NYC. |
| Pseudocode | Yes | Algorithm 1: Zeroth-order stochastic projected gradient framework for reserve optimization (see the first sketch after this table). |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code or a link to a code repository. |
| Open Datasets | No | The paper uses synthetic data with uniformly drawn bidder values and real bid distributions collected from the logs of a major ad exchange; the exchange logs are proprietary and no public dataset link is provided. |
| Dataset Splits | No | The paper mentions collecting bid data from exchange logs but does not specify train/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a logistic model and a neural network but does not name specific software libraries, frameworks, or version numbers. |
| Experiment Setup | Yes | The parameters used in these algorithms are specified in Appendix B. (Appendix B content: perturbation size β_t = δ/r_t; learning rate α_t = α_0/√t; number of samples n_t = N/T. For demand curve training, the simple logistic model uses a learning rate of 0.01 and the neural network uses a learning rate of 0.001.) These settings appear in the sketches after this table. |
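
For readers who want a concrete picture of the "zeroth-order stochastic projected gradient" framework named in the Pseudocode row, here is a minimal Python sketch. It is an illustration under stated assumptions, not the authors' Algorithm 1: the two-point gradient estimator, the δ/√t perturbation schedule (Appendix B states β_t = δ/r_t), the fixed-bid revenue model (which ignores the bidding-response component the paper also handles), and the hypothetical `sample_bids` callable are all assumptions.

```python
import numpy as np

def revenue(reserve, bids):
    """First-price revenue against fixed bid samples: the winner pays their own
    bid if it clears the reserve; otherwise the item goes unsold."""
    top = bids.max(axis=1)
    return np.where(top >= reserve, top, 0.0).mean()

def optimize_reserve(sample_bids, T=1000, N=100_000, alpha0=0.1, delta=0.05,
                     r_min=0.0, r_max=1.0, seed=0):
    """Zeroth-order stochastic projected gradient ascent on expected revenue.
    `sample_bids(n, rng)` is a hypothetical callable returning an (n, bidders) array."""
    rng = np.random.default_rng(seed)
    r = 0.5 * (r_min + r_max)
    n_t = max(1, N // T)                    # samples per round, n_t = N/T (Appendix B)
    for t in range(1, T + 1):
        # Perturbation size; delta/sqrt(t) is an assumed simplification of
        # Appendix B's stated schedule beta_t = delta/r_t.
        beta_t = delta / np.sqrt(t)
        alpha_t = alpha0 / np.sqrt(t)       # learning rate alpha_t = alpha_0/sqrt(t)
        bids = sample_bids(n_t, rng)        # fresh minibatch of auctions
        # Two-point zeroth-order estimate of d(revenue)/dr at the current reserve.
        grad = (revenue(r + beta_t, bids) - revenue(r - beta_t, bids)) / (2 * beta_t)
        # Projected gradient step: move uphill, then clip back to the feasible range.
        r = float(np.clip(r + alpha_t * grad, r_min, r_max))
    return r
```

For example, `optimize_reserve(lambda n, rng: rng.uniform(size=(n, 5)))` mirrors the synthetic uniform setting from the Research Type row (treating values as bids for simplicity).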
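
Similarly, a minimal sketch of the "simple logistic model" for the demand component, fit by plain SGD on the log loss with the 0.01 learning rate reported in Appendix B. The single-feature parameterization P(sale | r) = σ(a + b·r) and the training loop are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_logistic_demand(reserves, sold, epochs=200, lr=0.01, seed=0):
    """Fit P(sale | reserve) = sigmoid(a + b * reserve) by SGD on the log loss.
    `reserves` is a float array; `sold` is a 0/1 array of sale outcomes."""
    rng = np.random.default_rng(seed)
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(reserves)):
            p = 1.0 / (1.0 + np.exp(-(a + b * reserves[i])))
            g = p - sold[i]                 # d(log loss)/d(logit)
            a -= lr * g                     # bias update
            b -= lr * g * reserves[i]       # slope update
    return a, b
```

Per the Research Type row, this logistic form suffices on the uniform synthetic data, while the semi-synthetic exchange data needs a more flexible neural net (learning rate 0.001 per Appendix B); swapping in a richer model under the same loss is the natural extension of this sketch.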