Emulating the Expert: Inverse Optimization through Online Learning
Authors: Andreas Bärmann, Sebastian Pokutta, Oskar Schneider
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper reports preliminary computational tests: 'While a full computational study is beyond the scope of this paper and left for future work, we implemented a first preliminary version of our algorithm, and we report computational results for a few select problems.' It adds: 'In Table 1, we show the computational results for the Integer Knapsack Problem with n = 1000 items.' |
| Researcher Affiliation | Academia | 1Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany 2Georgia Institute of Technology, Atlanta, USA. |
| Pseudocode | Yes | Algorithm 1 Online Objective Function Learning |
| Open Source Code | No | The paper states: 'We have implemented our framework using python and Gurobi 7.0.1 (Gurobi Optimization, Inc., 2016).' but does not provide any link or explicit statement about making their own code open source. |
| Open Datasets | No | The paper states: 'We generated random instances for our computational results, considering T = 1000 observations for a varying number of goods n ∈ {100, 500, 1000}.' It describes the generation process but does not provide access to a public dataset. |
| Dataset Splits | No | The paper describes an online learning setting where observations are revealed over time, and it does not specify explicit training, validation, and test dataset splits in a traditional sense. |
| Hardware Specification | Yes | Our preliminary computational experiments have been obtained on a Mac Book Pro (2016) with an Intel Core i5 CPU with two 2.00 GHz cores. |
| Software Dependencies | Yes | We have implemented our framework using python and Gurobi 7.0.1 (Gurobi Optimization, Inc., 2016). |
| Experiment Setup | No | The paper describes how problem instances were generated (e.g., 'The customer's unknown utility vector is chosen at random as (arbitrary) integer numbers from the interval [1, 1000] from a uniform distribution') and refers to 'T = 1000 observations', but does not specify explicit hyperparameters or system-level training settings for its algorithm, such as a fixed learning rate or optimizer configuration. |
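The instance generation quoted above (integer utilities drawn uniformly from [1, 1000]) can be sketched as follows. This is a minimal reconstruction from the quoted excerpt only; the weight distribution and capacity rule are assumptions, since the excerpt does not fix them:

```python
import random

def generate_instance(n, seed=None):
    """Random knapsack-style instance mirroring the paper's description:
    utilities are uniform integers in [1, 1000]. The item weights and the
    capacity rule below are illustrative assumptions, not from the paper."""
    rng = random.Random(seed)
    utilities = [rng.randint(1, 1000) for _ in range(n)]
    weights = [rng.randint(1, 1000) for _ in range(n)]   # assumption
    capacity = sum(weights) // 2                          # assumption
    return utilities, weights, capacity

# one instance per observation, e.g. T = 1000 rounds with n = 100 goods
utilities, weights, capacity = generate_instance(100, seed=0)
```

The paper implements its framework with Gurobi 7.0.1 as the optimization oracle; the snippet above only covers the data-generation step.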
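Since the paper provides pseudocode (Algorithm 1, 'Online Objective Function Learning') but no public code, the overall loop can be illustrated with a small self-contained sketch: each round the learner solves the current instance with its utility estimate, observes the expert's (optimal) decision, and updates the estimate from the decision mismatch. A dynamic-programming knapsack solver stands in for the paper's Gurobi oracle, and the multiplicative-weights update, step size, and instance sizes are assumptions, not the paper's exact configuration:

```python
import math
import random

def knapsack_dp(values, weights, capacity):
    """0/1 knapsack via dynamic programming; returns a 0/1 decision vector."""
    n = len(values)
    dp = [0.0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, weights[i] - 1, -1):
            cand = dp[c - weights[i]] + values[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i][c] = True
    x, c = [0] * n, capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            x[i], c = 1, c - weights[i]
    return x

def learn_objective(true_utilities, T, eta=0.5, seed=0):
    """Online objective-function learning loop (a sketch of the setting in
    Algorithm 1): the expert optimizes a hidden utility vector; the learner
    imitates it and refines a utility estimate round by round."""
    rng = random.Random(seed)
    n = len(true_utilities)
    w = [1.0] * n            # unnormalised estimate of the utility vector
    losses = []
    for _ in range(T):
        # fresh random feasible set each round (sizes here are assumptions)
        item_w = [rng.randint(1, 10) for _ in range(n)]
        cap = sum(item_w) // 3
        est = [wi / sum(w) for wi in w]
        x_pred = knapsack_dp(est, item_w, cap)              # learner's decision
        x_expert = knapsack_dp(true_utilities, item_w, cap)  # expert's decision
        # per-round suboptimality measured in the hidden true utilities
        loss = sum(u * (xe - xp)
                   for u, xe, xp in zip(true_utilities, x_expert, x_pred))
        losses.append(loss)
        # multiplicative-weights update driven by the decision mismatch
        for i in range(n):
            w[i] *= math.exp(-eta * (x_pred[i] - x_expert[i]))
    return [wi / sum(w) for wi in w], losses
```

Because the expert solves each round optimally under the true utilities, every per-round loss is nonnegative, and the averaged loss is the quantity whose convergence the paper's regret analysis bounds.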