Robust Quadratic Programming for Price Optimization

Authors: Akihiro Yabe, Shinji Ito, Ryohei Fujimaki

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on both artificial and actual price data show that our robust price optimization allows users to determine best risk-return trade-offs and to explore safe, profitable price strategies.
Researcher Affiliation | Industry | Akihiro Yabe, Shinji Ito, Ryohei Fujimaki; NEC Corporation; a-yabe@cq.jp.nec.com, s-ito@me.jp.nec.com, rfujimaki@nec-labs.com
Pseudocode | Yes | Algorithm 1: Golden Section Search; Algorithm 2: Coordinate Descent. (A generic golden section search sketch is given below the table.)
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide a link to a code repository for the described methodology.
Open Datasets | Yes | We applied the proposed method to real sales history of beers [Ito and Fujimaki, 2017; Wang et al., 2015]. The data consisted of prices and sales quantities on 50 products over 642 days. ... The data has been provided by KSP-SP Co., LTD, http://www.ksp-sp.com.
Dataset Splits | No | The paper discusses 'training data' (e.g., D = 5M, 10M, 20M) and evaluation on 'artificial and actual price data', but it does not specify explicit training, validation, or test splits with percentages, absolute counts, or references to predefined splits. It describes how the artificial data were generated and the total size of the real-world dataset, but not how either was partitioned for model development and evaluation.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments; it only reports 'computational time'.
Software Dependencies | Yes | For solving this sample approximation, we used the MIQCP solver of GUROBI optimizer version 6.0, after convex approximation of Q̂ + U_j. (An illustrative gurobipy sketch is given below the table.)
Experiment Setup | Yes | The true demand (regression) model followed (1), where v(x) = (x_1, x_2, ..., x_M, 1) were linear features with N = M + 1. The true coefficient matrix A was then generated by a_{i,i} ~ U([-2M, -M]), a_{i,j} ~ U([0, 2]) for i ≠ j, and a_{i,N} ~ U([M/2, 3M/2]). We defined the pricing strategies as X := {0.6, 0.7, 0.8, 0.9, 1.0}^M, where 1.0 was the list price and 0.9 was 10% off. ... The training data {(x_d, y_d)}_{d=1}^D were generated by x_{d,i} = 1.0, 0.9, 0.8, 0.7, 0.6 with probability 0.5, 0.2, 0.1, 0.1, 0.1, respectively, for all i = 1, 2, ..., M, where the system noise in y_d followed N(0, 25 I_M). ... For each product, we set lower and upper bounds on the product price at 60% and 100% of its historical maximum price. We then conducted robust optimization for λ = 0, 30, 60, 90. (A synthetic-data generation sketch is given below the table.)
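
The pseudocode row above cites the paper's Algorithm 1, a golden section search. For context, here is a minimal, generic Python sketch of golden section minimization of a unimodal scalar function on an interval; it is a textbook version of the routine, not the authors' exact procedure, and the function name, tolerance, and usage example are illustrative placeholders.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden section search.

    Generic textbook routine; the paper applies the same idea as a
    one-dimensional search inside its robust optimization procedure.
    """
    invphi = (math.sqrt(5) - 1) / 2            # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# usage: minimize a simple quadratic on [0, 5]; returns a value near 2.0
x_star = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

The search shrinks the bracketing interval by the golden ratio at every step, so it needs only one new function evaluation per iteration.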
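
The software dependencies row quotes the use of Gurobi's MIQCP solver on a convex approximation of Q̂ + U_j. The sketch below only illustrates how a small convex quadratically constrained program can be posed through the gurobipy interface; the matrix Q, vector c, price bounds, and the norm-ball constraint are placeholder stand-ins rather than the paper's actual formulation, and a Gurobi license is required to run it.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

M = 5
rng = np.random.default_rng(0)
B = rng.normal(size=(M, M))
Q = -(B @ B.T)                 # negative semidefinite: a concave revenue proxy
c = rng.normal(size=M)

m = gp.Model("robust_qp_sketch")
x = m.addVars(M, lb=0.6, ub=1.0, name="x")   # placeholder price variables

# concave quadratic objective x' Q x + c' x, maximized (a convex problem)
obj = gp.quicksum(Q[i, j] * x[i] * x[j] for i in range(M) for j in range(M)) \
      + gp.quicksum(c[i] * x[i] for i in range(M))
m.setObjective(obj, GRB.MAXIMIZE)

# an example convex quadratic constraint ||x||^2 <= M
m.addQConstr(gp.quicksum(x[i] * x[i] for i in range(M)) <= M, name="ball")

m.optimize()
```

Because the objective is concave and the constraint is convex, the model stays within the class Gurobi's QCP machinery handles directly, which mirrors why the paper applies a convex approximation before calling the solver.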
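
The experiment setup row describes how the artificial data were generated. The sketch below regenerates data of that form, assuming the demand model y_d = A v(x_d) + ε_d with ε_d ~ N(0, 25 I_M); the random seed and the particular values of M and D are placeholders, since the paper sweeps several settings (e.g., D = 5M, 10M, 20M).

```python
import numpy as np

def generate_artificial_data(M=20, D=1000, seed=0):
    """Regenerate artificial data in the form described in the quoted setup.

    Assumption: the demand model is y_d = A v(x_d) + eps_d with
    v(x) = (x_1, ..., x_M, 1) and eps_d ~ N(0, 25 I_M).
    """
    rng = np.random.default_rng(seed)
    N = M + 1

    # true coefficients: negative own-price effects, small positive cross
    # effects, positive intercepts, as in the quoted setup
    A = rng.uniform(0.0, 2.0, size=(M, N))            # a_{i,j} ~ U([0, 2]), i != j
    for i in range(M):
        A[i, i] = rng.uniform(-2 * M, -M)             # a_{i,i} ~ U([-2M, -M])
        A[i, N - 1] = rng.uniform(M / 2, 3 * M / 2)   # a_{i,N} ~ U([M/2, 3M/2])

    # discount levels and sampling probabilities from the quoted setup
    levels = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
    probs = np.array([0.5, 0.2, 0.1, 0.1, 0.1])

    X = rng.choice(levels, size=(D, M), p=probs)      # price vectors x_d
    V = np.hstack([X, np.ones((D, 1))])               # features v(x_d)
    noise = rng.normal(0.0, 5.0, size=(D, M))         # std 5, i.e. variance 25
    Y = V @ A.T + noise                               # sales quantities y_d
    return X, Y, A

X, Y, A = generate_artificial_data(M=20, D=1000)
```
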