Learning to bid in revenue-maximizing auctions

Authors: Thomas Nedelec, Noureddine El Karoui, Vianney Perchet

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our study is done in the setting where one bidder is strategic. Using a variational approach, we study the complexity of the original objective and we introduce a relaxation of the objective functional in order to use gradient descent methods. Our approach is simple, general, and can be applied to various value distributions and revenue-maximizing mechanisms. The new strategies we derive yield massive uplifts compared to the traditional truthful bidding strategy. ... We numerically optimize this new objective through a simple neural network and get very significant improvements in bidder utility compared to truthful bidding.
Researcher Affiliation | Collaboration | ¹Criteo AI Lab, ²CMLA, ENS Paris-Saclay, ³UC Berkeley. Correspondence to: Thomas Nedelec <nedelec@cmla.ens-cachan.fr>.
Pseudocode | Yes | Algorithm 1: Boosted second price (r, γ). A hedged sketch of this mechanism is given after the table.
Open Source Code | Yes | We finally provide the code in PyTorch that has been used to run the different experiments. ... The full code in PyTorch is provided with the paper.
Open Datasets | No | The paper mentions using only synthetic data, sampled according to the bidders' value distributions; no public dataset is used or released.
Dataset Splits | No | The paper mentions sampling fresh batches from the value distribution but does not describe any train/validation/test splits.
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions PyTorch but does not list version numbers or any other software dependencies.
Experiment Setup | Yes | To fit the optimal strategies, we use a simple one-layer neural network with 200 ReLUs. We replace the indicator function by a sigmoid function to obtain a fully differentiable objective, and we optimize U_η(β_i) = E[(X_i − h_{β_i}(X_i)) · G_i(β_i(X_i)) · σ(η · h_{β_i}(X_i))], with σ(x) = 1/(1 + exp(−x)) and η = 1000. We start with a batch size of 10000 examples, sampled according to the value distribution of the bidder. We use stochastic gradient descent (SGD) with a decreasing learning rate starting at 0.001.
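For readers unfamiliar with the mechanism named in the Pseudocode row, the following is a minimal sketch of a boosted second-price auction with reserve r and per-bidder boost factors γ_i, under one common formulation: each bid b_i is scored as γ_i · b_i, the highest score among bids clearing the reserve wins, and the winner pays the smallest bid that would still have won. The function name, tie-breaking, and exact payment rule here are assumptions; the paper's Algorithm 1 may differ in detail.

```python
def boosted_second_price(bids, gammas, r):
    """Return (winner_index, payment), or (None, 0.0) if no bid clears r."""
    scores = [g * b for g, b in zip(gammas, bids)]
    # Only bids meeting the reserve price are eligible.
    eligible = [i for i, b in enumerate(bids) if b >= r]
    if not eligible:
        return None, 0.0
    winner = max(eligible, key=lambda i: scores[i])
    # The winner pays the smallest bid that both clears the reserve and
    # keeps its boosted score at least as high as the best competing score.
    best_other = max((scores[i] for i in eligible if i != winner), default=0.0)
    payment = max(r, best_other / gammas[winner])
    return winner, payment

# Example: bidder 0 has the lower bid but the higher boost, so it wins
# and pays max(0.5, 0.9 / 1.5) = 0.6.
print(boosted_second_price(bids=[0.8, 0.9], gammas=[1.5, 1.0], r=0.5))
```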
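The Experiment Setup row above translates naturally into a short PyTorch training loop. The sketch below is a hypothetical reconstruction, not the paper's released code: values are assumed to be Uniform[0, 1], the win probability G and the induced virtual value h_β are replaced by crude placeholders (the paper derives h_β from the strategy and the value distribution), and the learning-rate decay schedule is invented, since the paper only says the rate decreases from 0.001.

```python
import torch
import torch.nn as nn

eta = 1000.0         # sharpness of the sigmoid relaxation of the indicator
batch_size = 10_000  # examples sampled per gradient step
steps = 2_000

# One-layer network with 200 ReLUs, mapping a value x to a bid beta(x).
beta = nn.Sequential(nn.Linear(1, 200), nn.ReLU(), nn.Linear(200, 1))
opt = torch.optim.SGD(beta.parameters(), lr=0.001)
# Assumed decay schedule; the paper only states the learning rate decreases.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=500, gamma=0.5)

def G(bid):
    # Placeholder win probability: highest competing bid ~ Uniform[0, 1].
    return bid.clamp(0.0, 1.0)

def h(bid, x):
    # Crude stand-in for the induced virtual value h_beta(x). For
    # Uniform[0, 1] values, (1 - F(x)) / f(x) = 1 - x, and we pretend
    # beta'(x) ~= 1, which gives h(x) ~= beta(x) - (1 - x).
    return bid - (1.0 - x)

for step in range(steps):
    x = torch.rand(batch_size, 1)   # values from the bidder's distribution
    b = beta(x)
    hv = h(b, x)
    # Relaxed utility U_eta: the indicator 1{h >= 0} becomes sigmoid(eta * h).
    utility = ((x - hv) * G(b) * torch.sigmoid(eta * hv)).mean()
    opt.zero_grad()
    (-utility).backward()           # SGD minimizes, so negate the utility
    opt.step()
    sched.step()
```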