Duration-of-Stay Storage Assignment under Uncertainty

Authors: Michael Lingzhi Li, Elliott Wolf, Daniel Wintz

ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Assessment. Each entry below lists the reproducibility variable, the extracted result, and the supporting LLM response.
Research Type: Experimental
LLM Response: Section 6 ("COMPUTATION RESULTS") and Section 7 ("REAL-LIFE IMPLEMENTATION RESULTS"); "We then trained five separate neural networks and two baselines to evaluate the effectiveness of Parallel Net."; "Table 1: Table of Prediction Results for Different Machine Learning Architectures"; "As seen in Table 2, Parallel Net has a MAPE of 29%."
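For context on the quoted result, MAPE is the mean absolute percentage error. A minimal sketch of the standard computation (the function name and sample values are illustrative, not from the paper):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical durations of stay in days.
print(mape([10, 20, 30], [12, 18, 33]))  # ~13.3
```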
Researcher Affiliation: Collaboration
LLM Response: Michael Lingzhi Li, Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA 02139 (mlli@mit.edu); Elliott Wolf, Lineage Logistics, San Francisco, California (ewolf@lineagelogistics.com); Daniel Wintz, Lineage Logistics, San Francisco, California (dwintz@lineagelogistics.com)
Pseudocode: No
LLM Response: The paper describes the framework and architecture but does not include any explicit pseudocode or algorithm blocks.
Open Source Code: No
LLM Response: "Academic users can currently obtain the dataset by inquiring at dwintz@lineagelogistics.com. It would be hosted online in the near future." (This refers to the dataset, not the code for the methodology.)
Open Datasets: Yes
LLM Response: "We release the above dataset, which as far as the authors know, is the first publicly available dataset of warehousing records. Academic users can currently obtain the dataset by inquiring at dwintz@lineagelogistics.com. It would be hosted online in the near future."
Dataset Splits: Yes
LLM Response: The paper defines three date-based splits (a sketch of how they could be reproduced follows):
- Training Set: all shipments that exited the warehouse before 2017/06/30, about 60% of the entire dataset.
- Testing Set: all shipments that arrived at the warehouse after 2017/06/30 and left the warehouse before 2017/07/30, about 7% of the entire dataset.
- Extended Testing Set: all shipments that arrived at the warehouse after 2017/09/30 and left the warehouse before 2017/12/31, about 14% of the entire dataset.
"The learning rate, decay, and number of training epochs are 10-fold cross-validated."
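A minimal sketch of these date-based splits, assuming a pandas DataFrame with arrival_date and exit_date columns (the file name and column names are assumptions; the released dataset's schema may differ):

```python
import pandas as pd

# Hypothetical file and column names, not taken from the released dataset.
df = pd.read_csv("warehouse_records.csv", parse_dates=["arrival_date", "exit_date"])

cutoff = pd.Timestamp("2017-06-30")

# Training set: shipments that exited before 2017/06/30 (~60% of the data).
train = df[df["exit_date"] < cutoff]

# Testing set: arrived after 2017/06/30, left before 2017/07/30 (~7%).
test = df[(df["arrival_date"] > cutoff)
          & (df["exit_date"] < pd.Timestamp("2017-07-30"))]

# Extended testing set: arrived after 2017/09/30, left before 2017/12/31 (~14%).
ext_test = df[(df["arrival_date"] > pd.Timestamp("2017-09-30"))
              & (df["exit_date"] < pd.Timestamp("2017-12-31"))]
```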
Hardware Specification: Yes
LLM Response: "We used a 6-core i7-5820K, GTX1080 GPU, and 16GB RAM."
Software Dependencies: Yes
LLM Response: "All neural networks are trained on Tensorflow 1.9.0 with Adam optimizer (Kingma & Ba, 2014). The GBM is trained on R 3.4.4 with the lightgbm package, and number of trees 10-fold cross-validated over the training set."
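A minimal sketch of the training setup under TensorFlow 1.x with the Adam optimizer named above (the two-layer model is a placeholder, not the paper's Parallel Net architecture, and the learning rate and decay values are illustrative; the paper cross-validates them):

```python
import tensorflow as tf  # written against the 1.x API, e.g. tensorflow==1.9.0

# Placeholder regressor; Parallel Net's actual architecture is in the paper.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Adam (Kingma & Ba, 2014) with hypothetical learning rate and decay.
optimizer = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-4)
model.compile(optimizer=optimizer, loss="mae")
```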
Experiment Setup: Yes
LLM Response: "We limit the description to the first five words with zero padding." "The learning rate, decay, and number of training epochs are 10-fold cross-validated." "In interest of brevity, we omit the detailed architecture choice in RNN and CNN along with the output layer structure and include it in Appendix 9.2."
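A minimal sketch of the five-word truncation with zero padding quoted above (the whitespace tokenization, vocabulary mapping, and PAD index of 0 are assumptions; the paper's exact preprocessing is deferred to its appendix):

```python
def encode_description(text, vocab, max_words=5, pad_id=0):
    """Keep the first `max_words` words of a description; zero-pad if shorter."""
    ids = [vocab.get(w, pad_id) for w in text.lower().split()[:max_words]]
    return ids + [pad_id] * (max_words - len(ids))

# Hypothetical vocabulary and product description.
vocab = {"frozen": 1, "chicken": 2, "breast": 3, "boneless": 4}
print(encode_description("Frozen chicken breast boneless", vocab))  # [1, 2, 3, 4, 0]
```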