Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dynamic Pricing with Monotonicity Constraint under Unknown Parametric Demand Model
Authors: Su Jia, Andrew Li, R. Ravi
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We consider a Continuum-Armed Bandit problem with an additional monotonicity constraint (or markdown constraint) on the actions selected. This problem faithfully models a natural revenue management problem, called markdown pricing, where the objective is to adaptively reduce the price over a finite horizon to maximize the expected revenues. Chen ([3]) and Jia et al. ([9]) recently showed a tight T^{3/4} regret bound over T rounds under minimal assumptions of unimodality and Lipschitzness in the reward function. This bound shows that markdown pricing is strictly harder than unconstrained dynamic pricing (i.e., without the monotonicity constraint), which admits T^{2/3} regret under the same assumptions ([11]). However, in practice, demand functions are usually assumed to have certain functional forms (e.g., linear or exponential), rendering the demand learning easier and suggesting better regret bounds. In this work we introduce a concept, markdown dimension, that measures the complexity of a parametric family, and present optimal regret bounds that improve upon the previous T^{3/4} bound under this framework. |
| Researcher Affiliation | Academia | Tepper School of Business, Carnegie Mellon University |
| Pseudocode | Yes | Algorithm 1: Cautious Myopic (CM) Policy. ... Algorithm 2: Iterative Cautious Myopic (ICM) Policy. ... Algorithm 3: Uniform Elimination Policy (UE). |
| Open Source Code | No | The paper does not provide any explicit statements about making its source code available or links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not conduct empirical experiments with datasets. Therefore, there is no mention of a dataset used for training or its public availability. |
| Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments with datasets. Therefore, there is no mention of dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or versions. |
| Experiment Setup | No | The paper is theoretical and focuses on mathematical proofs and algorithms. It does not describe an experimental setup with hyperparameters or training configurations. |
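The markdown constraint described in the abstract can be made concrete with a toy simulation. The sketch below is illustrative only and is not the paper's algorithm: it assumes a hypothetical linear demand model d(p) = a − b·p plus noise, and a naive greedy rule that keeps marking the price down while revenue improves. The point it demonstrates is the defining constraint of markdown pricing: the price path is non-increasing, so the seller can never revisit a higher price once it has been abandoned.

```python
import random


def markdown_pricing_demo(a=1.0, b=0.5, horizon=200, seed=0):
    """Toy markdown-pricing simulation (illustrative, not the paper's policy).

    Demand is assumed linear: d(p) = a - b*p + Gaussian noise. The seller
    starts at the highest price and may only hold or decrease it.
    """
    rng = random.Random(seed)
    price = 1.0            # start at the highest admissible price
    step = 0.05            # fixed markdown decrement (hypothetical choice)
    best_revenue = -1.0
    total_revenue = 0.0
    prices = []
    for _ in range(horizon):
        demand = max(0.0, a - b * price + rng.gauss(0, 0.05))
        revenue = price * demand
        total_revenue += revenue
        prices.append(price)
        # Greedy myopic rule: mark down while observed revenue improves;
        # otherwise hold. The price is never allowed to increase.
        if revenue > best_revenue:
            best_revenue = revenue
            price = max(0.0, price - step)
    # The markdown constraint holds by construction: path is non-increasing.
    assert all(p1 >= p2 for p1, p2 in zip(prices, prices[1:]))
    return total_revenue, prices


total, path = markdown_pricing_demo()
```

Because the price can only move downward, an overly aggressive markdown is irreversible; this irreversibility is exactly why the abstract notes that markdown pricing admits worse regret (T^{3/4}) than unconstrained dynamic pricing (T^{2/3}) absent parametric demand assumptions.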