Logarithmic Regret in Feature-based Dynamic Pricing

Authors: Jianyu Xu, Yu-Xiang Wang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we conduct numerical experiments to validate EMLP and ONSP. In comparison with the existing work, we implement a discretized EXP-4 [Auer et al., 2002] algorithm for pricing, as is introduced in Cohen et al. [2020] (in a slightly different setting). We will test these three algorithms in both stochastic and adversarial settings."
Researcher Affiliation | Academia | Jianyu Xu, Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106, xu_jy15@ucsb.edu; Yu-Xiang Wang, Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106, yuxiangw@cs.ucsb.edu
Pseudocode | Yes | Algorithm 1: Epoch-based max-likelihood pricing (EMLP) and Algorithm 2: Online Newton Step Pricing (ONSP) (a hedged structural sketch of the epoch-based loop appears after the table).
Open Source Code | Yes | "We included all codes and data in the supplementary material, along with a Readme document as instructions of running the program and reproduce our results."
Open Datasets | No | "In the numerical experiments, we only used simulated data that has nothing to do with the natural sciences and do not include human subjects." The paper does not provide concrete access information (a specific link, DOI, repository name, or formal citation with authors/year) for a publicly available or open dataset.
Dataset Splits | No | The paper mentions running experiments for a certain number of rounds (T = 2^16) and repeating them multiple times, but it does not specify any training, validation, or test splits in terms of percentages, counts, or a predefined partitioning strategy.
Hardware Specification | No | "We just ran all numerical experiments on a laptop. We did mention that the experiment of EXP-4 is very time-consuming." This statement only mentions a 'laptop', a general computing device, without providing specific details such as CPU model, GPU model, or memory amount.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9), needed to replicate the experiment.
Experiment Setup | Yes | "Basically, we assume d = 2, B_1 = B_2 = B = 1 and N_t ∼ N(0, σ²) with σ = 0.25. In both settings, we conduct EMLP and ONSP for T = 2^16 rounds. For ONSP, we empirically select γ and ϵ that accelerates the convergence, instead of using the values specified in Lemma 11." (A simulation sketch using these parameters appears below.)
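
Taken at face value, the quoted experiment setup fixes d = 2, B_1 = B_2 = B = 1, σ = 0.25, and T = 2^16. The minimal Python sketch below shows one way such a simulated market could be instantiated; the SimulatedMarket class, its method names, and the uniform draws for θ* and the features are illustrative assumptions, since the excerpt does not specify how the data are generated.

```python
import numpy as np

# Parameters stated in the quoted experiment setup.
D = 2          # feature dimension d
B = 1.0        # B1 = B2 = B: bound on feature and parameter norms
SIGMA = 0.25   # standard deviation of the Gaussian valuation noise N_t
T = 2 ** 16    # number of pricing rounds

rng = np.random.default_rng(0)

class SimulatedMarket:
    """Linear-valuation market: y_t = x_t . theta* + N_t; a sale occurs iff price <= y_t."""

    def __init__(self):
        # How theta* and the features are drawn is not stated in the excerpt;
        # uniform draws rescaled to respect the bound B are an assumption.
        theta = rng.uniform(0.0, 1.0, size=D)
        self.theta_star = theta * (B / max(np.linalg.norm(theta), B))

    def next_feature(self):
        x = rng.uniform(0.0, 1.0, size=D)
        return x / max(np.linalg.norm(x), 1.0)  # keep ||x_t|| <= B = 1

    def post_price(self, x, price):
        valuation = float(x @ self.theta_star) + rng.normal(0.0, SIGMA)
        return int(price <= valuation)  # 1 = sale, 0 = no sale
```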
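The Pseudocode row cites Algorithm 1 (EMLP) and Algorithm 2 (ONSP). The sketch below illustrates only the doubling-epoch, fit-then-price structure that an EMLP-style method follows under the linear valuation model with Gaussian noise; the function names, the likelihood, the greedy pricing rule, and the optimizer choices are simplified assumptions, not the authors' implementation, and ONSP's online Newton-step update is not sketched.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import norm

SIGMA = 0.25  # noise scale used in the quoted setup

def neg_log_likelihood(theta, X, prices, sales):
    """Binary-response likelihood: a sale occurs iff x^T theta* + noise >= posted price."""
    margins = X @ theta - prices
    probs = np.clip(norm.cdf(margins, scale=SIGMA), 1e-12, 1 - 1e-12)
    return -np.sum(sales * np.log(probs) + (1 - sales) * np.log(1 - probs))

def greedy_price(x, theta_hat):
    """Price maximizing expected revenue p * P(sale) under the estimated parameter."""
    mean_val = float(x @ theta_hat)
    res = minimize_scalar(lambda p: -p * norm.sf(p - mean_val, scale=SIGMA),
                          bounds=(0.0, mean_val + 3 * SIGMA), method="bounded")
    return res.x

def emlp_like_pricing(env, T, d):
    """Doubling epochs: refit an MLE on the previous epoch, price greedily in the current one."""
    theta_hat = np.zeros(d)
    t, epoch_len = 0, 1
    history = []  # (feature, price, sale) triples from the previous epoch
    while t < T:
        if history:  # refit at each epoch boundary
            X, P, S = map(np.array, zip(*history))
            theta_hat = minimize(neg_log_likelihood, theta_hat, args=(X, P, S)).x
        history = []
        for _ in range(min(epoch_len, T - t)):
            x = env.next_feature()
            p = greedy_price(x, theta_hat)
            sale = env.post_price(x, p)  # 1 if the buyer accepts price p
            history.append((x, p, sale))
            t += 1
        epoch_len *= 2
    return theta_hat
```

With the hypothetical SimulatedMarket above, the loop would be run as `theta_hat = emlp_like_pricing(SimulatedMarket(), T, D)`.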