A Discretization Framework for Robust Contextual Stochastic Optimization

Authors: Rares C Cristian, Georgia Perakis

ICLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Computational experiments on a variety of applications: We show with computational experiments that the proposed method is competitive in terms of the average error relative to existing approaches in the literature. In addition to testing our approach on linear optimization applications such as portfolio optimization using historical stock data, we also consider nonlinear optimization applications such as inventory allocation and electricity generation using real-world data. Finally, through these experiments, we show significant improvement in terms of robustness: we obtain as much as 20 times lower cost in the worst case when compared to other end-to-end learning methods and 5 times lower than other robust approaches.
Researcher Affiliation Academia Rares Cristian, Georgia Perakis Operations Research Center Massachusetts Institute of Technology, Cambridge, MA, USA {raresc,georgiap}@mit.edu
Pseudocode Yes Algorithm We summarize the algorithm as the following steps: (i) Define subsets H^ε_k = {w ∈ P : R_{ν_k}(w) ≤ ε} for each datapoint. (ii) Construct labels p^n_k to indicate whether w*(ν_n) ∈ H^ε_k. (iii) Train an ML model p̂^ε_k(x) on the multi-label dataset (x^n, (p^n_k)_{k=1,...,N}). (iv) For an out-of-sample x, take the decision ŵ^{ε,φ}(x) = argmin_{w ∈ P} Σ_{k=1}^{N} p̂^ε_k(x) · max{R_{ν_k}(w) − φ, 0} (9)
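The four steps quoted above can be sketched end to end on a toy problem. Everything below is an illustrative assumption rather than the paper's code: a one-dimensional decision, a quadratic cost standing in for the regret R_{ν_k}(w), a k-nearest-neighbors estimate of the label probabilities (the paper mentions k-NN as one choice of model), and a grid search standing in for the argmin in step (iv).

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(w, nu):
    """Toy stand-in for the regret R_{nu}(w): quadratic loss (assumption)."""
    return (w - nu) ** 2

# Synthetic data: feature x, scenario nu = 2x + noise; the optimal
# decision for a known scenario is w*(nu) = nu under the quadratic cost.
N = 200
x = rng.uniform(-1.0, 1.0, size=N)
nu = 2.0 * x + 0.1 * rng.standard_normal(N)
w_star = nu

eps, phi = 0.05, 0.0

# Steps (i)-(ii): label matrix p[n, k] = 1 iff w*(nu_n) lies in
# H^eps_k = {w : R_{nu_k}(w) <= eps}.
p = (cost(w_star[:, None], nu[None, :]) <= eps).astype(float)

def p_hat(x_new, k_nn=10):
    """Step (iii): k-NN estimate of the multi-label probabilities p^eps_k(x)."""
    idx = np.argsort(np.abs(x - x_new))[:k_nn]
    return p[idx].mean(axis=0)  # length-N vector, one entry per datapoint k

def decide(x_new):
    """Step (iv): grid search for argmin_w sum_k p_hat_k(x) * max{R_k(w) - phi, 0}."""
    probs = p_hat(x_new)
    grid = np.linspace(-3.0, 3.0, 601)
    hinge = np.maximum(cost(grid[:, None], nu[None, :]) - phi, 0.0)
    obj = (probs[None, :] * hinge).sum(axis=1)
    return grid[np.argmin(obj)]

w_hat = decide(0.5)  # scenarios near x = 0.5 cluster around nu = 1.0
```

With φ = 0 and a quadratic cost, the objective in `decide` reduces to a probability-weighted least-squares fit over the scenarios, so the returned decision lands near the scenarios the model deems likely; raising φ makes the hinge ignore small regrets, which is where the robustness trade-off enters.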
Open Source Code No The paper does not provide an explicit statement or link to open-source code for the methodology described.
Open Datasets Yes We use the same range of data as used in Donti et al. (2017). Here, we must make decisions w ∈ R^24 for the amount of electricity generation for each hour of the following day.
Dataset Splits No The paper mentions 'testing data' and 'training data' but does not provide specific percentages, absolute counts, or detailed methodology for dataset splits (e.g., 80/10/10, or number of samples in each split).
Hardware Specification No The paper does not provide specific details on the hardware used, such as GPU/CPU models, memory, or processor types.
Software Dependencies No The paper mentions various software components and methods such as 'K-nearest neighbors', 'neural network', and the 'OptNet' framework, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup Yes Experimental setup: We use the same unit cost parameters as well as data, and compare against the same models as in Donti et al. (2017). However, we present not only the average cost incurred on the testing data but also on various quantiles of the cost distribution.