Contextual Reserve Price Optimization in Auctions via Mixed Integer Programming

Authors: Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni

NeurIPS 2020

Reproducibility Variable | Result | LLM Response

Research Type | Experimental
Finally, we present computational results, showcasing that the MIP formulation, along with its LP relaxation, are able to achieve superior in- and out-of-sample performance, as compared to state-of-the-art algorithms on both real and synthetic datasets.

Researcher Affiliation | Collaboration
Joey Huchette, Rice University, joehuchette@rice.edu; Haihao Lu, University of Chicago, haihao.lu@chicagobooth.edu; Hossein Esfandiari, Google Research, esfandiari@google.com; Vahab Mirrokni, Google Research, mirrokni@google.com

Pseudocode | No
No structured pseudocode or algorithm blocks are present in the paper; algorithms are described mathematically and in prose.

Open Source Code | Yes
Our implementation is publicly available at: https://github.com/joehuchette/reserve-price-optimization

Open Datasets | Yes
We use a published medium-size eBay data set for reproducibility, which comprises 70,000 sports memorabilia auctions, to illustrate the performance of our algorithms. The data set is provided by Jay Grossman and was subsequently studied in the context of reserve price optimization [38]. eBay Data Set, accessed May 25, 2020 from https://cims.nyu.edu/~munoz/data/.

Dataset Splits | Yes
We fix d = 50 features, n = 1000 training samples, along with test and validation data sets each with 5000 samples.

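For concreteness, a minimal sketch of splits with these sizes. Gaussian features are an assumption for illustration; the quoted setup does not specify the paper's synthetic generative model.

```julia
using Random

# Hypothetical re-creation of the quoted split sizes (1000 train,
# 5000 validation, 5000 test, d = 50 features). randn is an assumed
# stand-in for the paper's unspecified feature distribution.
Random.seed!(1)
d = 50
sizes = (train = 1_000, validation = 5_000, test = 5_000)
splits = Dict(name => randn(n, d) for (name, n) in pairs(sizes))

size(splits[:train])  # (1000, 50)
```
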
Hardware Specification | No
No specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running the experiments are provided.

Software Dependencies | Yes
We use JuMP [21, 35] and Gurobi v8.1.1 [25] to model and solve, respectively, the optimization problems underlying the MIP, MIP-R, LP, and DC methods.

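To illustrate this stack, here is a minimal JuMP + Gurobi sketch of a big-M MIP for the piecewise second-price reward (a posted reserve p earns p if b2 <= p <= b1, b2 if p < b2, and 0 if p > b1). This is a simplified sketch under stated assumptions, not the paper's exact formulation; the data generator is hypothetical, and the code uses current JuMP syntax rather than the JuMP version contemporary with Gurobi v8.1.1.

```julia
using JuMP, Gurobi

# Hypothetical toy data: n auctions with top two bids b1 .>= b2 and d features.
n, d = 200, 5
X  = rand(n, d)
β₀ = randn(d)
b2 = max.(X * β₀ .+ 0.5 .* randn(n), 0.0)
b1 = b2 .+ rand(n)

T = 2.0^3                                            # one bound from the CV grid
M = T * maximum(sum(abs.(X), dims = 2)) + maximum(b1)  # crude but valid big-M

model = Model(Gurobi.Optimizer)
@variable(model, -T <= β[1:d] <= T)    # contextual pricing weights, domain [-T, +T]^d
@variable(model, z[1:n, 1:3], Bin)     # one indicator per reward piece, per auction
@variable(model, w[1:n])               # realized reward per auction

# Reserve price for auction i is the linear score x_i' β.
@expression(model, p[i = 1:n], sum(X[i, j] * β[j] for j in 1:d))

for i in 1:n
    @constraint(model, sum(z[i, :]) == 1)                   # exactly one piece active
    @constraint(model, p[i] >= b1[i] - M * (1 - z[i, 1]))   # piece 1: p >= b1, reward 0
    @constraint(model, p[i] <= b1[i] + M * (1 - z[i, 2]))   # piece 2: b2 <= p <= b1,
    @constraint(model, p[i] >= b2[i] - M * (1 - z[i, 2]))   #          reward p
    @constraint(model, p[i] <= b2[i] + M * (1 - z[i, 3]))   # piece 3: p <= b2, reward b2
    @constraint(model, w[i] <= M * (1 - z[i, 1]))           # w = 0 on piece 1
    @constraint(model, w[i] <= p[i] + M * (1 - z[i, 2]))    # w <= p on piece 2
    @constraint(model, w[i] <= b2[i] + M * (1 - z[i, 3]))   # w <= b2 on piece 3
end

@objective(model, Max, sum(w))  # maximize total revenue over the training auctions
optimize!(model)
```

Running this requires a Gurobi license; swapping in an open-source MILP solver such as HiGHS.jl (Model(HiGHS.Optimizer)) makes the sketch runnable without one.
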
Experiment Setup | Yes
Hyperparameter tuning. The LP, MIP-R, and MIP algorithms require that the parameter domain X is explicitly specified. We utilize cross-validation to tune the bounds on each parameter as [-T, +T] for T ∈ {2^-1, ..., 2^9}. Additionally, DC requires two hyperparameters: one for a penalty associated with the bound constraints, and the second for the slope of its continuous approximation of the discontinuous reward function r. We do cross-validation as suggested in Mohri and Medina [38].

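A minimal sketch of the bound-tuning loop this describes, assuming a hypothetical fit_reserve_model placeholder for the MIP/LP solve; the reward function below is the standard second-price reserve reward studied in Mohri and Medina [38].

```julia
using LinearAlgebra

# Candidate bounds from the quoted grid: T ∈ {2^-1, ..., 2^9}.
candidate_T = 2.0 .^ (-1:9)

# Second-price reward of posting reserve p against top two bids b1 >= b2:
# p if it lands between the bids, b2 if it is below both, 0 if it prices
# out the highest bidder.
reward(p, b1, b2) = p > b1 ? 0.0 : (p >= b2 ? p : b2)

# `fit_reserve_model` is a hypothetical stand-in for solving the MIP,
# MIP-R, or LP with β constrained to [-T, +T]^d on the training split.
# bids_val is a vector of (b1, b2) tuples for the validation auctions.
function tune_bound(fit_reserve_model, Xtr, bids_tr, Xval, bids_val)
    best_T, best_val = first(candidate_T), -Inf
    for T in candidate_T
        β = fit_reserve_model(Xtr, bids_tr, T)
        val = sum(reward(dot(x, β), b1, b2)
                  for (x, (b1, b2)) in zip(eachrow(Xval), bids_val))
        if val > best_val
            best_T, best_val = T, val
        end
    end
    return best_T  # bound with the highest held-out revenue
end
```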