Generalized Inverse Optimization through Online Learning
Authors: Chaosheng Dong, Yiran Chen, Bo Zeng
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we show the online learning approach can learn the parameters with great accuracy and is very robust to noises, and achieves a dramatic improvement in computational efficacy over the batch learning approach. |
| Researcher Affiliation | Academia | Chaosheng Dong, Department of Industrial Engineering, University of Pittsburgh (chaosheng@pitt.edu); Yiran Chen, Department of Electrical and Computer Engineering, Duke University (yiran.chen@duke.edu); Bo Zeng, Department of Industrial Engineering, University of Pittsburgh (bzeng@pitt.edu) |
| Pseudocode | Yes | Algorithm 1 Implicit Online Learning for Generalized Inverse Optimization |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The paper describes a synthetic data generation process for its experiments, stating: 'The (price, noisy decision) pair in each round is generated as follows. In round t, we generate the prices from a uniform distribution, i.e. p_i^t ∼ U[p_min, p_max], with p_min = 5 and p_max = 25. Then, we solve UMP and get the optimal decision x^t. Next, the noisy decision y^t is obtained by corrupting x^t with noise that has a jointly uniform distribution with support [−0.25, 0.25]^2. Namely, y^t = x^t + ε^t, where each element of ε^t ∼ U(−0.25, 0.25).' An illustrative sketch of this generation process is given after the table. |
| Dataset Splits | No | The paper describes an online learning setting where data arrives sequentially over 'T rounds' and does not specify traditional train/validation/test dataset splits. Experiments involve generating data on the fly and evaluating performance over these sequential rounds, e.g., 'Learning the utility function over T = 1000 rounds'. |
| Hardware Specification | Yes | Our preliminary experiments have been run on Bridges system at the Pittsburgh Supercomputing Center (PSC) [23]. |
| Software Dependencies | No | The paper states: 'The mixed integer second order conic programs... are solved by Gurobi. All the algorithms are programmed with Julia [24].' It mentions software names but does not provide specific version numbers for Gurobi or Julia. |
| Experiment Setup | Yes | The learning rate is set to η_t = 5/√t (for the utility function), η_t = 100/√t (for the budget), and η_t = 2/√t (for the transportation cost). An illustrative proximal-update sketch using this learning-rate schedule follows the table. |
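
The synthetic data-generation process quoted in the Open Datasets row can be sketched in a few lines. The snippet below is only an illustration under stated assumptions: the budget, the Cobb-Douglas utility, and the `solve_ump` helper are hypothetical stand-ins for the paper's actual utility-maximization problem (UMP), which the paper solves with Gurobi; only the price range, the noise support, and the corruption step y^t = x^t + ε^t follow the quoted description.

```python
import numpy as np

rng = np.random.default_rng(0)

P_MIN, P_MAX = 5.0, 25.0   # price range quoted in the Open Datasets row
NOISE = 0.25               # noise support is [-0.25, 0.25]^2
BUDGET = 40.0              # hypothetical budget, not taken from the paper
ALPHA = 0.5                # hypothetical Cobb-Douglas exponent


def solve_ump(p):
    """Stand-in utility-maximization problem (UMP).

    Maximizes the illustrative Cobb-Douglas utility x1^ALPHA * x2^(1-ALPHA)
    subject to p @ x <= BUDGET, which has the closed-form maximizer below.
    The paper's actual UMP is solved with Gurobi and may differ from this toy model."""
    return np.array([ALPHA * BUDGET / p[0], (1 - ALPHA) * BUDGET / p[1]])


def generate_round():
    """One (price, noisy decision) pair, following the quoted description."""
    p_t = rng.uniform(P_MIN, P_MAX, size=2)       # p_i^t ~ U[p_min, p_max]
    x_t = solve_ump(p_t)                          # optimal decision x^t
    eps_t = rng.uniform(-NOISE, NOISE, size=2)    # eps^t ~ U(-0.25, 0.25)^2
    return p_t, x_t + eps_t                       # noisy decision y^t = x^t + eps^t


pairs = [generate_round() for _ in range(1000)]   # T = 1000 rounds
```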
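
The Experiment Setup row quotes learning rates of the form η_t = c/√t, used inside Algorithm 1 (Implicit Online Learning for Generalized Inverse Optimization). The sketch below is a minimal illustration of a generic implicit (proximal-point) online update with that schedule, not the paper's implementation: the paper's per-round loss requires solving the forward optimization problem, which is reformulated as a mixed-integer second-order conic program and solved with Gurobi (see the Software Dependencies row). Here the loss is a user-supplied callable and `scipy.optimize.minimize` stands in for the exact update solver.

```python
import numpy as np
from scipy.optimize import minimize


def implicit_online_learning(loss, theta0, rounds, c=5.0):
    """Generic implicit (proximal-point) online update with eta_t = c / sqrt(t):

        theta_{t+1} = argmin_theta 0.5 * ||theta - theta_t||^2 + eta_t * loss(theta, data_t)

    `loss(theta, data_t)` is a user-supplied, smooth per-round loss; the paper's
    inverse-optimization loss involves an inner forward problem and is handled
    there via a conic reformulation rather than a generic minimizer.
    """
    theta = np.asarray(theta0, dtype=float)
    for t, data_t in enumerate(rounds, start=1):
        eta_t = c / np.sqrt(t)

        def prox_objective(th, theta_prev=theta, eta=eta_t, d=data_t):
            return 0.5 * np.sum((th - theta_prev) ** 2) + eta * loss(th, d)

        theta = minimize(prox_objective, theta).x  # implicit update for round t
    return theta


# Toy usage with a smooth surrogate loss (purely illustrative):
toy_rounds = [np.array([1.0, 2.0])] * 50
theta_hat = implicit_online_learning(
    lambda th, y: float(np.sum((th - y) ** 2)), np.zeros(2), toy_rounds, c=5.0
)
```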