Semi-Parametric Dynamic Contextual Pricing

Authors: Virag Shah, Ramesh Johari, Jose Blanchet

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically test a scalable implementation of our algorithm and observe good performance."
Researcher Affiliation | Academia | "Virag Shah, Management Science and Engineering, Stanford University, California, USA 94305, virag@stanford.edu"
Pseudocode | No | The paper states "Formal definitions are provided in Appendix B to save space," but the provided text does not include Appendix B, so no pseudocode is present in this excerpt.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | No | "First, we simulate our model with covariate dimension d = 2, where covariate vectors are i.i.d. d-dimensional standard normal random vectors, the parameter space is Θ = [0, 1]^d, the parameter vector is θ0 = (1/√2, 1/√2), the noise support is Z = [0, 1], and the noise distribution is Z ~ Uniform([0, 1])."
Dataset Splits | No | The paper describes generating synthetic data for simulations but does not specify training, validation, or test dataset splits for a pre-existing dataset.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using a "semi-parametric regression technique from Plan and Vershynin (2013)" but does not provide specific software names with version numbers.
Experiment Setup | Yes | "In this setting, we simulate policies DEEP-C, Decoupled DEEP-C, and Sparse DEEP-C for time horizon n = 10,000 and for different values of parameter γ. Each policy is simulated 5,000 times for each set of parameters. Next, we also simulate our model for d = 100 with s = 4 non-zero entries in θ0, with each non-zero entry equal to 1/√s. Each policy is simulated 1,500 times for each set of parameters, with the rest of the setup being the same as earlier."
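The synthetic setup quoted in the Open Datasets and Experiment Setup rows can be sketched as follows. This is a minimal reconstruction of only the data generation described in those quotes (covariates, parameter vector, noise); the valuation model and the DEEP-C pricing policies themselves are not reproduced, and the choice of placing the s non-zero entries in the first s coordinates of θ0 is an assumption made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_theta(d, s):
    """Parameter vector in [0, 1]^d with s non-zero entries, each 1/sqrt(s).

    Which coordinates are non-zero is not specified in the excerpt;
    the first s coordinates are used here as an illustrative assumption.
    """
    theta = np.zeros(d)
    theta[:s] = 1.0 / np.sqrt(s)
    return theta

def simulate_data(n, d, rng):
    """Covariates are i.i.d. d-dimensional standard normal vectors;
    noise is drawn from Uniform([0, 1]), matching the quoted setup."""
    X = rng.standard_normal((n, d))
    z = rng.uniform(0.0, 1.0, size=n)
    return X, z

# Dense setting: d = 2, theta0 = (1/sqrt(2), 1/sqrt(2)), horizon n = 10,000.
theta_dense = make_theta(d=2, s=2)

# Sparse setting: d = 100 with s = 4 non-zero entries, each equal to 1/sqrt(s).
theta_sparse = make_theta(d=100, s=4)

X, z = simulate_data(n=10_000, d=2, rng=rng)
```

Note that both parameter vectors have unit Euclidean norm by construction, since s entries of value 1/√s square-sum to 1.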