Online Bin Packing with Predictions

Authors: Spyros Angelopoulos, Shahin Kamali, Kimia Shadkami

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform extensive experiments on our algorithms. Specifically, we evaluate them on a variety of publicly available benchmarks, such as the BPPLIB benchmarks [Delorme et al., 2018], but also on distributions studied in the context of offline bin packing, such as the Weibull distribution [Castiñeiras et al., 2012]. The results show that our algorithms outperform the known and efficient algorithms without any predictions that are typically used in practice.
Researcher Affiliation | Academia | 1 Centre National de la Recherche Scientifique (CNRS); 2 LIP6, Sorbonne Université, Paris, France; 3 University of Manitoba, Winnipeg, Manitoba, Canada
Pseudocode | No | The paper describes the algorithms PROFILEPACKING and HYBRID(λ) in natural language, but it does not include structured pseudocode or clearly labeled algorithm blocks; a hedged, illustrative sketch of a profile-based heuristic follows the table.
Open Source Code | No | The paper does not explicitly state that the source code for the methodology is released, nor does it provide a link to a code repository. An arXiv link appears in the references, but it points to the paper itself, not to source code.
Open Datasets | Yes | We evaluate our algorithms on a variety of publicly available benchmarks, such as the BPPLIB benchmarks [Delorme et al., 2018], but also on distributions studied in the context of offline bin packing, such as the Weibull distribution [Castiñeiras et al., 2012].
Dataset Splits | No | The paper describes how input sequences are generated and how predictions are derived (using a prefix of the sequence), but it does not specify explicit train/validation/test dataset splits (e.g., percentages or sample counts) for the experiments; a prefix-based estimation sketch follows the table.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper mentions the algorithms used (e.g., FIRSTFITDECREASING) but does not list any software dependencies with specific version numbers (e.g., Python 3.x, PyTorch 1.x, or specific solver versions).
Experiment Setup | Yes | We fix the size of the sequence to n = 10^6. We set the bin capacity to k = 100, and we also scale down each item to the closest integer in [1, k]. [...] In our experiments, we chose sh ∈ [1.0, 4.0]. [...] we thus set sc = 1000, as in [Castiñeiras et al., 2012]. We fix the size of the profile set to m = 5000.
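
On the Research Type row: the prediction-free algorithms "typically used in practice" in online bin packing are classical heuristics such as First Fit and Best Fit. The paper does not publish code, so the following Python sketch of those two baselines is purely illustrative and is not the authors' implementation.

```python
def first_fit(sequence, capacity):
    """Online First Fit: place each item into the first open bin with room,
    opening a new bin only when no existing bin fits."""
    bins = []  # each bin is a list of item sizes
    for s in sequence:
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins


def best_fit(sequence, capacity):
    """Online Best Fit: place each item into the feasible bin that leaves the
    least residual space, opening a new bin only when no existing bin fits."""
    bins = []
    for s in sequence:
        feasible = [b for b in bins if sum(b) + s <= capacity]
        if feasible:
            min(feasible, key=lambda b: capacity - (sum(b) + s)).append(s)
        else:
            bins.append([s])
    return bins
```

Both heuristics are written for clarity rather than speed; efficient implementations keep the open bins in a search structure, but that detail is immaterial here.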
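On the Pseudocode row: since PROFILEPACKING and HYBRID(λ) are described only in natural language, the block below is a speculative, simplified sketch of a profile-based packing heuristic in the spirit of that description. The profile construction, the replication rule when reserved slots run out, and the First Fit fallback for sizes outside the profile are our assumptions, not the authors' specification.

```python
from collections import Counter, defaultdict


def first_fit_decreasing(items, capacity):
    """Offline FFD, used here to pack the profile into template bins."""
    bins = []
    for s in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins


def profile_packing_sketch(sequence, capacity, predicted_freq, m=5000):
    """Speculative sketch: pack a size profile offline, then serve the online
    sequence by filling the reserved slots of that packing.

    predicted_freq maps item size -> predicted frequency (fractions summing
    to roughly 1).  All details below are assumptions made for illustration.
    """
    # Build a profile multiset of roughly m items from the predicted frequencies.
    profile = []
    for size, freq in predicted_freq.items():
        profile.extend([size] * max(1, round(freq * m)))
    template = first_fit_decreasing(profile, capacity)

    bins = []                        # (placed items, remaining reserved slots)
    free_slots = defaultdict(list)   # size -> indices of bins with a free slot

    def open_template_copy():
        for tbin in template:
            idx = len(bins)
            bins.append(([], Counter(tbin)))
            for size in set(tbin):
                free_slots[size].append(idx)

    open_template_copy()
    fallback = []                    # plain First Fit bins for unpredicted sizes

    for s in sequence:
        if s in predicted_freq:
            if not free_slots[s]:
                open_template_copy()         # replenish reserved slots (assumption)
            idx = free_slots[s][0]
            placed, reserved = bins[idx]
            placed.append(s)
            reserved[s] -= 1
            if reserved[s] == 0:
                free_slots[s].pop(0)
        else:
            for b in fallback:               # First Fit fallback (assumption)
                if sum(b) + s <= capacity:
                    b.append(s)
                    break
            else:
                fallback.append([s])
    return [placed for placed, _ in bins if placed] + fallback
```

HYBRID(λ) reportedly combines a profile-based strategy with a prediction-free heuristic, with λ controlling the trade-off; we do not sketch it because the combination rule is not specified in the excerpts above.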
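On the Dataset Splits row: predictions are reportedly derived from a prefix of the input sequence rather than from a conventional train/validation/test split. A minimal sketch of that kind of prefix-based frequency estimation (the prefix length is an unspecified experimental knob here) is:

```python
from collections import Counter


def frequencies_from_prefix(sequence, prefix_len):
    """Estimate item-size frequencies from the first prefix_len items.

    Returns a dict mapping size -> fraction of the prefix with that size.
    The choice of prefix_len is not specified here and is left as a parameter.
    """
    prefix = sequence[:prefix_len]
    counts = Counter(prefix)
    total = len(prefix)
    return {size: c / total for size, c in counts.items()}
```

The resulting dictionary is exactly the kind of predicted_freq input assumed by the profile sketch above.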
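On the Experiment Setup row: the quoted parameters (n = 10^6, bin capacity k = 100, Weibull shape sh ∈ [1.0, 4.0], scale sc = 1000, profile size m = 5000) suggest an input generator along the following lines. The paper only says items are "scaled down to the closest integer in [1, k]", so the rescaling by the sample maximum below is an assumption.

```python
import numpy as np


def weibull_items(n=10**6, k=100, shape=2.0, scale=1000, seed=0):
    """Sample n item sizes from Weibull(shape, scale) and map them to
    integers in [1, k].  The rescaling rule is an assumption; the source
    only states that items are scaled to the closest integer in [1, k]."""
    rng = np.random.default_rng(seed)
    raw = scale * rng.weibull(shape, size=n)
    items = np.clip(np.rint(raw / raw.max() * k), 1, k).astype(int)
    return items.tolist()


# Illustrative shape values; the paper varies the shape within [1.0, 4.0].
sequences = {sh: weibull_items(shape=sh) for sh in (1.0, 2.0, 3.0, 4.0)}
```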