Context-Based Dynamic Pricing with Partially Linear Demand Model

Authors: Jinzhi Bu, David Simchi-Levi, Chonghuan Wang

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental In this section, we conduct a numerical study to test the empirical performances of our algorithms. We measure the performance of a learning algorithm π by the relative regret defined as follows: $\frac{\sum_{t=1}^{T} \mathbb{E}[p_t^* d(p_t^*, x_t) - p_t d(p_t, x_t)]}{\sum_{t=1}^{T} \mathbb{E}[p_t^* d(p_t^*, x_t)]} \times 100\%$. For both models, we compare our algorithms with the linear greedy algorithm that estimates the demand function by a linear function and myopically selects the optimal price that maximizes the proxy revenue in each period t. For our DPLPE model, we also compare our algorithm with the random price shock (RPS) algorithm proposed by [25]. For each instance, we repeat our experiments for 50 independent runs, and approximate the relative regret by the empirical relative regret averaged over 50 runs.
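The quoted relative-regret metric is straightforward to recompute from simulation output. Below is a minimal Python sketch; the revenue arrays and their numerical values are placeholders standing in for the clairvoyant and algorithm revenues, since the paper's simulation code is not released.

```python
import numpy as np

def relative_regret(opt_revenue, alg_revenue):
    """Empirical relative regret in percent, averaged over independent runs.

    opt_revenue, alg_revenue: arrays of shape (n_runs, T) holding the per-period
    revenue p_t* d(p_t*, x_t) of the clairvoyant price and p_t d(p_t, x_t) of
    the learning algorithm, respectively.
    """
    opt_total = opt_revenue.sum(axis=1)   # total optimal revenue per run
    alg_total = alg_revenue.sum(axis=1)   # total algorithm revenue per run
    return ((opt_total - alg_total) / opt_total * 100.0).mean()

# Placeholder usage with 50 runs and horizon T = 1000:
rng = np.random.default_rng(0)
opt = 40.0 + rng.normal(0.0, 1.0, size=(50, 1000))
alg = opt - np.abs(rng.normal(0.0, 2.0, size=(50, 1000)))
print(f"empirical relative regret: {relative_regret(opt, alg):.2f}%")
```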
Researcher Affiliation Academia Department of Logistics and Maritime Studies, Faculty of Business, The Hong Kong Polytechnic University; Institute for Data, Systems, and Society, Department of Civil and Environmental Engineering, Operations Research Center, MIT; Laboratory for Information and Decision Systems, MIT
Pseudocode Yes Algorithm 1: Algorithm for Dynamic Pricing with Linear Price (ADPLP); Algorithm 2: Algorithm for Dynamic Pricing with Linear Context (ADPLC); Algorithm 3: Regression with Linear Context (RLC)
Open Source Code No In the checklist, question 3a states: 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Section 4.' However, Section 4 describes the numerical study but does not contain a URL, a statement about code availability in the supplementary material, or instructions for accessing the source code.
Open Datasets No The paper uses synthetic data generated based on defined demand models and distributions, for example: '$D_t(p_t) = \mu(p_t, x_t) + \varepsilon_t$, where $\mu(p_t, x_t)$ is an unknown function and $\{\varepsilon_t\}_{t \ge 1}$ is a sequence of i.i.d. sub-Gaussian random variables (r.v.'s) with zero mean and variance proxy $\sigma^2$' and 'For our DPLPE model with demand function $d(p, x_t) = bp + g(x_t)$, we set $b = -5$, $\varepsilon \sim N(0, 5)$, $[\underline{b}, \bar{b}] = [-8, -3]$, $[\underline{p}, \bar{p}] = [1, 19]$, $x_t \sim \text{Uniform}([0, 1]^d)$'.
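Because the data are synthetic, they can be regenerated directly from the quoted model. The sketch below is one such generator under the quoted DPLPE parameters; the function g(x) and the context dimension d_ctx are illustrative assumptions, since the quote does not reproduce the paper's exact choice of g.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quoted DPLPE setup: d(p, x) = b*p + g(x), b in [-8, -3], prices in [1, 19],
# noise ~ N(0, 5), context x_t ~ Uniform([0, 1]^d).
b = -5.0
sigma = np.sqrt(5.0)          # N(0, 5) has variance 5
p_low, p_high = 1.0, 19.0
d_ctx = 2                     # illustrative context dimension (not from the quote)

def g(x):
    # Placeholder nonlinear context effect; the paper's exact g(x) is not quoted.
    return 100.0 + 5.0 * np.sin(2.0 * np.pi * x).sum()

def sample_period(p):
    """One period of D_t(p_t) = b*p_t + g(x_t) + eps_t at a posted price p."""
    x = rng.uniform(0.0, 1.0, size=d_ctx)
    eps = rng.normal(0.0, sigma)
    return b * p + g(x) + eps, x

demand, context = sample_period(p=10.0)
```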
Dataset Splits No The paper describes a numerical study using synthetic data and repeated experiments for 50 independent runs, but it does not specify any explicit train, validation, or test data splits.
Hardware Specification No The paper does not provide any specific hardware details such as GPU or CPU models, memory, or type of computing resources used for the experiments.
Software Dependencies No The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or specific solvers).
Experiment Setup Yes For our DPLPE model with demand function $d(p, x_t) = bp + g(x_t)$, we set $b = -5$, $\varepsilon \sim N(0, 5)$, $[\underline{b}, \bar{b}] = [-8, -3]$, $[\underline{p}, \bar{p}] = [1, 19]$, $x_t \sim \text{Uniform}([0, 1]^d)$, and consider the following form of function $g(x)$... For our DPLCE model, we consider the following demand: $D_t(p_t) = -\frac{4}{15}\delta p_t^{2.5} + 30 + \frac{1}{d}\mathbf{1}_d^\top x_t + \varepsilon_t$, where $\mathbf{1}_d$ denotes $(1, 1, \dots, 1)^\top \in \mathbb{R}^d$, $x_t$ is uniformly distributed on $[0, 1]^d$ independently, and $\varepsilon_t \sim N(0, 0.1^2)$. It can be verified that $k = 2.5$ in this case. In Figure 2(a), we fix $d = 2$, $\delta = 3.75$ and $[\underline{p}, \bar{p}] = [2.6, 3.8]$... We also test the effect of $\delta$ on the empirical relative regret of ADPLC by varying $\delta \in \{3.75, 2.5, 1\}$ while keeping $d = 1$.
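For completeness, here is a hedged sketch of the quoted DPLCE instance with the Figure 2(a) parameters, together with a simple grid search for the clairvoyant optimal price that appears in the regret definition. The negative sign on the price term is an assumption, recovered from the requirement that demand decrease in price; the grid-search helper is illustrative and not described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quoted DPLCE instance with the Figure 2(a) parameters.
d, delta = 2, 3.75
p_low, p_high = 2.6, 3.8
sigma = 0.1                                   # eps_t ~ N(0, 0.1^2)

def expected_demand(p, x):
    # mu(p, x) = -(4/15)*delta*p^2.5 + 30 + (1/d) * sum(x)   (sign assumed, see above)
    return -(4.0 / 15.0) * delta * p ** 2.5 + 30.0 + x.sum() / d

def sample_period(p):
    """One period of D_t(p_t) = mu(p_t, x_t) + eps_t at a posted price p."""
    x = rng.uniform(0.0, 1.0, size=d)
    return expected_demand(p, x) + rng.normal(0.0, sigma), x

def clairvoyant_price(x, grid_size=1000):
    """Grid-search the revenue-maximizing price p* for a given context."""
    grid = np.linspace(p_low, p_high, grid_size)
    revenue = grid * np.array([expected_demand(p, x) for p in grid])
    return grid[revenue.argmax()]

demand, context = sample_period(p=3.0)
p_star = clairvoyant_price(context)
```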