Pricing with Contextual Elasticity and Heteroscedastic Valuation

Authors: Jianyu Xu, Yu-Xiang Wang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Numerical Experiments: Here we conduct numerical experiments to validate the low-regret performance of our algorithm PwP. Since we are the first to study this heteroscedastic valuation model, we do not have a baseline algorithm working for exactly the same problem. However, we can modify the RMLP-2 algorithm in Javanmard & Nazerzadeh (2019) by only replacing their maximum-likelihood estimator (MLE) for θ with a new MLE for both θ and η. This modified RMLP-2 algorithm does not have a regret guarantee in our setting, but it may still serve as a baseline to compare with. In the following part, we compare the cumulative regrets of our PwP algorithm with the (modified) RMLP-2 in the following two scenarios: 1. the linear-fractional valuation y_t = x_t^T θ + N_t; 2. a fully-linear valuation y_t = x_t^T θ + (x_t^T η) N_t.
Researcher Affiliation | Academia | 1 University of California, Santa Barbara; 2 University of California, San Diego.
Pseudocode | Yes | Algorithm 1: Pricing with Perturbation (PwP) ... Algorithm 2: Online Newton Step (ONS)
Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository.
Open Datasets | No | We test PwP and the modified RMLP-2 on the demand model assumed in Eq. (1) with both stochastic and adversarial {x_t} sequences, respectively. Basically, we assume T = 2^16, d = 2, N_t ~ N(0, σ^2) with σ = 0.5, and we repeatedly run each algorithm 20 times in each experiment setting.
Dataset Splits | No | The paper describes generating simulation data but does not specify traditional training, validation, or test dataset splits.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or cloud resources) used for conducting the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or specific libraries).
Experiment Setup | Yes | Basically, we assume T = 2^16, d = 2, N_t ~ N(0, σ^2) with σ = 0.5, and we repeatedly run each algorithm 20 times in each experiment setting. ... We implement and test PwP and RMLP-2 on stochastic {x_t}'s, where x_t are i.i.d. sampled from N(µ_x, Σ_x) (for µ_x = [10, 10, . . . , 10] and some randomly sampled Σ_x) and then normalized s.t. ||x_t||_2 ≤ 1. ... Here we design an adversarial {x_t} sequence to attack both algorithms. (Simulation sketches illustrating this setup follow the table.)
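
The "Open Datasets" and "Experiment Setup" rows describe a fully synthetic setup: T = 2^16 rounds, d = 2 covariate dimensions, valuation noise N_t ~ N(0, 0.5^2), covariates drawn i.i.d. from N(µ_x, Σ_x) and normalized into the unit ball, with 20 repeated runs. The sketch below reproduces only that data-generation step under stated assumptions; the function names and the way Σ_x is drawn are ours, not the paper's.

```python
import numpy as np

# Settings quoted in the report: T = 2^16 rounds, d = 2 features,
# noise N_t ~ N(0, sigma^2) with sigma = 0.5, 20 repeated runs.
T, d, sigma, n_runs = 2 ** 16, 2, 0.5, 20

rng = np.random.default_rng(0)

def make_covariates(T, d, rng):
    """Stochastic {x_t}: i.i.d. N(mu_x, Sigma_x), then normalized so ||x_t||_2 <= 1."""
    mu_x = np.full(d, 10.0)              # mu_x = [10, ..., 10] as stated in the report
    A = rng.standard_normal((d, d))
    Sigma_x = A @ A.T                    # one way to get a "randomly sampled" PSD covariance (assumption)
    X = rng.multivariate_normal(mu_x, Sigma_x, size=T)
    norms = np.maximum(np.linalg.norm(X, axis=1), 1.0)
    return X / norms[:, None]            # enforce ||x_t||_2 <= 1

def make_noise(T, rng, sigma=sigma):
    """Valuation noise N_t ~ N(0, sigma^2)."""
    return rng.normal(0.0, sigma, size=T)
```

The adversarial {x_t} sequence mentioned in the same row is not specified in the quoted text, so it is not sketched here.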
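The "Research Type" row quotes the two simulated valuation scenarios: y_t = x_t^T θ + N_t and the fully-linear heteroscedastic y_t = x_t^T θ + (x_t^T η) N_t. The sketch below generates valuations for both and a binary purchase feedback 1{y_t ≥ p_t}; treating the feedback as binary is our assumption, since the table does not quote the exact demand model of Eq. (1), and the parameter values in the usage example are arbitrary placeholders.

```python
import numpy as np

def valuations(X, noise, theta, eta=None):
    """y_t for the two quoted scenarios.

    Scenario 1 (eta is None):  y_t = x_t^T theta + N_t
    Scenario 2 (fully linear): y_t = x_t^T theta + (x_t^T eta) * N_t
    """
    base = X @ theta
    if eta is None:
        return base + noise
    return base + (X @ eta) * noise

def purchase_feedback(y, prices):
    """Binary demand 1{y_t >= p_t} -- an assumption about the feedback in Eq. (1)."""
    return (y >= prices).astype(float)

# Example: scenario 2 ("fully linear") with hypothetical parameters.
rng = np.random.default_rng(0)
T, d, sigma = 2 ** 16, 2, 0.5
X = rng.uniform(-1.0, 1.0, size=(T, d))                      # stand-in covariates; see the covariate sketch above
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
theta, eta = np.array([0.5, 0.5]), np.array([0.3, 0.2])
y = valuations(X, rng.normal(0.0, sigma, size=T), theta, eta)
```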
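The "Pseudocode" row lists an Online Newton Step subroutine (Algorithm 2) alongside PwP, but the paper's exact variant is not quoted. For orientation only, the following is a generic ONS update in the style of Hazan et al. (2007): accumulate A_t = A_{t-1} + g_t g_t^T, take a Newton-like step, then project. The step size, regularization, and the simple l2-ball projection are placeholders and not necessarily what Algorithm 2 uses.

```python
import numpy as np

class OnlineNewtonStep:
    """Generic ONS update; not necessarily the exact variant in the paper's Algorithm 2."""

    def __init__(self, dim, gamma=0.1, eps=1.0, radius=1.0):
        self.w = np.zeros(dim)          # current parameter estimate
        self.A = eps * np.eye(dim)      # regularized Gram matrix of past gradients
        self.gamma = gamma              # ONS step-size parameter (placeholder value)
        self.radius = radius            # radius of the l2 ball used for projection (placeholder)

    def update(self, grad):
        self.A += np.outer(grad, grad)
        self.w -= np.linalg.solve(self.A, grad) / self.gamma
        # Project back onto an l2 ball; the standard ONS projection is taken in the
        # A_t-norm, which the paper may use instead.
        norm = np.linalg.norm(self.w)
        if norm > self.radius:
            self.w *= self.radius / norm
        return self.w
```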