A Study of Compact Reserve Pricing Languages

Authors: MohammadHossein Bateni, Hossein Esfandiari, Vahab Mirrokni, Saeed Seddighin

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Furthermore, our empirical study confirms the effectiveness of multiplicative pricing in practice. In fact, the simulations show that our algorithm obtains 90-98% of the value of the best solution that sets the reserve prices for each auction individually (i.e., the optimum set of reserve prices)."
Researcher Affiliation | Collaboration | Google, University of Maryland; {bateni,mirrokni}@google.com, {esfandiari,seddighin}@cs.umd.edu
Pseudocode | No | The paper describes the algorithm in Section 5 in natural language but does not present it in a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any concrete access information, such as a repository link or an explicit statement about code release, for the described methodology.
Open Datasets | No | "Our experiments are based on millions of bids submitted by advertisers to Google Advertising Exchange, AdX." The paper does not provide concrete access information (link, DOI, or specific citation) for a publicly available dataset.
Dataset Splits | No | The paper mentions data from "Google Advertising Exchange, AdX" and "18 different data sets", but it does not specify any training, validation, or test splits (e.g., percentages, counts, or a cross-validation setup) needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or processor types) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments.
Experiment Setup | No | While the paper describes the general experimental setting and comparisons (uniform vs. unrestricted), it does not report specific setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or precise training configurations for its iterative algorithm.