Learning to Clear the Market

Authors: Weiran Shen, Sébastien Lahaie, Renato Paes Leme

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To evaluate our approach, we fit a model of clearing prices over a massive dataset of bids in display ad auctions from a major ad exchange. The learned prices outperform other modeling techniques in the literature in terms of revenue and efficiency trade-offs."
Researcher Affiliation | Collaboration | "Weiran Shen (1), Sébastien Lahaie (2), Renato Paes Leme (2); (1) Tsinghua University, Beijing, China; (2) Google Research, New York, New York, USA."
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide links to a code repository.
Open Datasets | No | "We collected a dataset of auction records by sampling a fraction of the logs from Google's Ad Exchange over two consecutive days in January 2019." (This indicates an internal, proprietary dataset and does not provide public access information.)
Dataset Splits | Yes | "We used the first day of data as the training set and the second day as the test set."
Hardware Specification | No | "The models were all fit using TensorFlow with the default Adam optimizer and minibatches of size 512 distributed over 20 machines." (This statement mentions the number of machines but lacks specific hardware details such as CPU/GPU models or memory.)
Software Dependencies | No | "The models were all fit using TensorFlow with the default Adam optimizer and minibatches of size 512 distributed over 20 machines." (The paper mentions TensorFlow but does not specify a version number, nor does it list any other software dependencies with versions.)
Experiment Setup | Yes | "All the models we evaluate are linear models of the price p as a function of features z of the auction records. ... The models were all fit using TensorFlow with the default Adam optimizer and minibatches of size 512 distributed over 20 machines. ... The models were all trained over at least 400K iterations ... For each loss function we added the match-rate regularization λ(p − c)⁺, and we varied λ to span a range of realized match rates."
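The Experiment Setup row above is concrete enough to sketch in code. Below is a minimal TensorFlow sketch of that recipe, not the authors' implementation: a linear model of the price p over auction features z, trained with the default Adam optimizer on minibatches of 512, with the quoted match-rate regularizer λ(p − c)⁺ added to a base loss. The feature dimension, the squared-error base loss (a stand-in for the paper's clearing-price losses, which are not reproduced in the quote), and the reading of c as the bid the price must clear are assumptions for illustration.

```python
import tensorflow as tf

NUM_FEATURES = 32   # hypothetical; the paper does not list its feature set z
LAMBDA = 0.1        # match-rate regularization weight; the paper sweeps this
BATCH_SIZE = 512    # minibatch size reported in the paper

# Linear model of the price p as a function of auction features z.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build((None, NUM_FEATURES))          # create weights up front
optimizer = tf.keras.optimizers.Adam()     # "default Adam optimizer"

def regularized_loss(p, c, base_loss):
    # Match-rate regularizer lambda * (p - c)_+: a hinge penalty on pricing
    # above c (the bid the price must clear, under our reading), since doing
    # so risks losing the match.
    match_penalty = LAMBDA * tf.reduce_mean(tf.nn.relu(p - c))
    return base_loss + match_penalty

@tf.function
def train_step(z, c):
    with tf.GradientTape() as tape:
        p = tf.squeeze(model(z), axis=-1)
        # Squared-error stand-in for the paper's clearing-price losses.
        base = tf.reduce_mean(tf.square(p - c))
        loss = regularized_loss(p, c, base)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Example: one step on synthetic data shaped like a minibatch of auction records.
z = tf.random.normal([BATCH_SIZE, NUM_FEATURES])
c = tf.random.uniform([BATCH_SIZE], minval=0.1, maxval=1.0)
print(float(train_step(z, c)))
```

Increasing LAMBDA pushes predicted prices below the clearing bids more aggressively, which is how varying λ spans a range of realized match rates as the quoted setup describes; the 20-machine distribution and 400K-iteration schedule are omitted here for brevity.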