Revenue Optimization with Approximate Bid Predictions

Authors: Andres Munoz, Sergei Vassilvitskii

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 7 "Experiments"; Figure 1: "(a) Mean revenue of the three algorithms on the linear scenario. (b) Mean revenue of the three algorithms on the bimodal scenario. (c) Mean revenue on auction data."
Researcher Affiliation | Industry | "Andrés Muñoz Medina, Google Research, 76 9th Ave, New York, NY 10011; Sergei Vassilvitskii, Google Research, 76 9th Ave, New York, NY 10011"
Pseudocode | Yes | Algorithm 1: "Reserve Inference from Clusters"
Open Source Code | No | The paper does not contain an unambiguous statement that the source code for the methodology described is publicly available, nor does it provide a direct link to a code repository.
Open Datasets | No | "For each experiment we generated a training dataset S_train, a holdout set S_holdout and a test set S_test, each with 16,000 examples." "We collected auction bid data from Ad Exchange for 4 different publisher-advertiser pairs. For each experiment, we extract a random training sample of 20,000 points as well as a holdout and test sample." The paper describes using self-generated or internally collected data without providing access information or citing publicly available datasets.
Dataset Splits | Yes | "For each experiment we generated a training dataset S_train, a holdout set S_holdout and a test set S_test, each with 16,000 examples." "Finally, the choice of hyperparameters γ for the Lipschitz loss and k for the clustering algorithm was done by selecting the best performing parameter over the holdout set."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. It only implies computation was performed without specifying the underlying machines.
Software Dependencies | No | The paper mentions training a "linear regressor" but does not specify any software libraries, frameworks, or their version numbers (e.g., TensorFlow, PyTorch, scikit-learn with specific versions).
Experiment Setup | Yes | "Finally, the choice of hyperparameters γ for the Lipschitz loss and k for the clustering algorithm was done by selecting the best performing parameter over the holdout set. Following the suggestions in [Mohri and Medina, 2014] we chose γ ∈ {0.001, 0.01, 0.1, 1.0} and k ∈ {2, 4, . . . , 24}."
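The holdout-based selection quoted above amounts to an exhaustive search over the grid γ ∈ {0.001, 0.01, 0.1, 1.0} × k ∈ {2, 4, ..., 24}. A minimal sketch follows; the `holdout_revenue` callback is hypothetical (the paper releases no training code), standing in for "train with (γ, k), then measure mean revenue on S_holdout":

```python
import itertools

def holdout_grid_search(holdout_revenue,
                        gammas=(0.001, 0.01, 0.1, 1.0),
                        ks=range(2, 25, 2)):
    """Return the (gamma, k) pair that maximizes revenue on the holdout set.

    `holdout_revenue(gamma, k)` is a placeholder for the full pipeline:
    fit the clustering with k clusters and the gamma-Lipschitz surrogate
    loss, infer reserves, and evaluate mean revenue on S_holdout.
    """
    # Exhaustively score every point on the paper's hyperparameter grid.
    return max(itertools.product(gammas, ks),
               key=lambda pair: holdout_revenue(*pair))
```

For example, with a toy revenue surface peaked at γ = 0.1, k = 10, the search returns that grid point.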