Sample Complexity of Posted Pricing for a Single Item

Authors: Billy Jin, Thomas Kesselheim, Will Ma, Sahil Singla

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper investigates how many samples from buyers' value distributions are needed to find near-optimal posted prices, covering both independent and correlated buyer distributions and both welfare and revenue maximization. The authors obtain matching upper and lower bounds (up to logarithmic factors) on the sample complexity for all of these settings.
Researcher Affiliation | Academia | Billy Jin: Cornell University, School of Operations Research and Information Engineering (bzj3@cornell.edu). Thomas Kesselheim: University of Bonn, Institute of Computer Science and Lamarr Institute for Machine Learning and Artificial Intelligence (thomas.kesselheim@uni-bonn.de). Will Ma: Columbia University, Graduate School of Business and Data Science Institute (wm2428@gsb.columbia.edu). Sahil Singla: Georgia Tech, School of Computer Science (ssingla@gatech.edu).
Pseudocode | No | The paper describes its algorithms (e.g., a dynamic programming solution) in prose but does not provide formal pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no statement about releasing open-source code for the methodology described, and it provides no link to a code repository.
Open Datasets | No | This is a theoretical paper that conducts no experiments on datasets, so it neither uses nor documents publicly available training data.
Dataset Splits | No | This is a theoretical paper and involves no empirical experiments with training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and describes no experiments that would require hardware specifications.
Software Dependencies | No | The paper is theoretical and mentions no software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and describes no experimental setup with hyperparameters or system-level training settings.
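To make the paper's setting concrete, here is a minimal sketch (not the authors' algorithm, and only the single-buyer revenue case) of how a posted price might be estimated from samples: restrict candidate prices to the sampled values and pick the one maximizing empirical revenue.

```python
import random

def empirical_posted_price(samples):
    """Return the price p among the samples maximizing the empirical
    revenue p * Pr[v >= p], estimated from the same samples.
    (Illustrative sketch; restricting candidates to sample values is a
    standard reduction, not a procedure taken from the paper.)"""
    best_price, best_revenue = 0.0, 0.0
    for p in samples:
        # Empirical probability that a buyer's value meets the price.
        sell_prob = sum(v >= p for v in samples) / len(samples)
        revenue = p * sell_prob
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

random.seed(0)
# Hypothetical value distribution: Uniform[0, 1], where the true
# revenue-maximizing posted price is 1/2 (expected revenue 1/4).
samples = [random.uniform(0, 1) for _ in range(1000)]
price = empirical_posted_price(samples)
print(price)
```

With 1000 samples the estimate lands near the true optimum of 1/2; how fast such estimates converge, across welfare/revenue objectives and independent/correlated buyers, is exactly the sample-complexity question the paper answers.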