PackIt: A Virtual Environment for Geometric Planning

Authors: Ankit Goyal, Jia Deng

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also construct a set of challenging packing tasks using an evolutionary algorithm. Further, we study various baselines for the task that include model-free learning-based and heuristic-based methods, as well as search-based optimization methods that assume access to the model of the environment.
Researcher Affiliation | Academia | Department of Computer Science, Princeton University, Princeton, NJ, USA. Correspondence to: Ankit Goyal <agoyal@princeton.edu>, Jia Deng <jiadeng@cs.princeton.edu>.
Pseudocode | Yes | Algorithm 1: Pack Creator
Open Source Code | Yes | Code and data are available at https://github.com/princeton-vl/PackIt.
Open Datasets | Yes | Shapes: The shapes for generating packs come from ShapeNet (Chang et al., 2015), a large-scale dataset of 3D models. [...] In total, for generating the packs for training, testing and validation, we have sets of 4207, 4196 and 4185 shapes respectively. Our dataset contains 991 training, 515 testing and 503 validation packs, with an average of 22.6 shapes per pack.
Dataset Splits | Yes | Our dataset contains 991 training, 515 testing and 503 validation packs, with an average of 22.6 shapes per pack. Every 5k steps, we evaluate performance on validation samples. Note that we used a subset of the validation packs (80 packs) for faster training.
Hardware Specification | Yes | The evolution process is computationally expensive; generating one pack takes around 10 hours on a system with 2 CPUs (Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz) and 16GB RAM.
Software Dependencies | No | The paper mentions software such as Unity, Unity ML-Agents, OpenAI Gym, and Proximal Policy Optimization (PPO), but does not provide specific version numbers for these components.
Experiment Setup | Yes | We train PackNN for 200k shape-selection steps (which took 6 days using 1 GPU and 8 parallel environments). Every 5k steps, we evaluate performance on validation samples.
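The paper's Pack Creator evolves sets of ShapeNet shapes so that they can densely fill a 3-D box. As a rough illustration of that idea only, the sketch below evolves a set of item sizes to maximize fill fraction in a 1-D "box" — the names `pack_fraction` and `evolve_pack`, the greedy packer, and all parameters are hypothetical stand-ins, not the paper's algorithm, which operates on 3-D meshes inside Unity.

```python
import random

def pack_fraction(sizes, capacity=1.0):
    """Greedily pack items (largest first) into a 1-D box; return fill fraction."""
    used = 0.0
    for s in sorted(sizes, reverse=True):
        if used + s <= capacity:
            used += s
    return used / capacity

def evolve_pack(pop_size=20, n_items=8, generations=50, seed=0):
    """Evolve a candidate 'pack' (a list of item sizes) toward a dense fill.

    Selection keeps the top half by fill fraction; each survivor spawns one
    Gaussian-mutated child, keeping the population size constant.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(0.01, 0.5) for _ in range(n_items)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=pack_fraction, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[min(0.5, max(0.01, s + rng.gauss(0, 0.02))) for s in p]
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=pack_fraction)
```

With the fixed seed above, the evolved pack fills most of the box, mirroring how the paper's evolution produces packs that admit a dense (hence challenging) packing.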
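Since the report notes the environment is exposed through the OpenAI Gym interface (versions unspecified), here is a minimal sketch of that `reset`/`step` contract. `ToyPackEnv` is a hypothetical 1-D stand-in for illustration — the real PackIt environment is Unity-based and its observation, action, and reward definitions differ.

```python
class ToyPackEnv:
    """Toy environment following the classic Gym reset/step API (no gym import).

    State: total volume used plus the set of shapes still available.
    Action: the index of the shape to place next.
    Reward: the volume successfully added; an invalid or overflowing
    choice ends the episode with zero reward.
    """

    def __init__(self, shape_sizes, capacity=1.0):
        self.shape_sizes = list(shape_sizes)
        self.capacity = capacity
        self.reset()

    def reset(self):
        self.remaining = set(range(len(self.shape_sizes)))
        self.used = 0.0
        return self._obs()

    def _obs(self):
        return (self.used, tuple(sorted(self.remaining)))

    def step(self, action):
        size = self.shape_sizes[action]
        if action not in self.remaining or self.used + size > self.capacity:
            return self._obs(), 0.0, True, {}
        self.remaining.remove(action)
        self.used += size
        done = not self.remaining
        return self._obs(), size, done, {}
```

Usage: placing shapes `[0.5, 0.3, 0.2]` in order fills the unit box exactly, accumulating a total reward of 1.0 before the episode terminates.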
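The experiment-setup row describes a schedule of 200k training steps with an evaluation on a fixed validation subset every 5k steps. The loop below sketches only that schedule; the training step and validation score are random placeholders (standing in for PPO updates and pack efficiency), and the function name and defaults are hypothetical.

```python
import random

def train_with_periodic_eval(total_steps=200_000, eval_interval=5_000,
                             n_val_packs=80, seed=0):
    """Run `total_steps` placeholder training steps, scoring a fixed
    validation subset every `eval_interval` steps.

    Returns a list of (step, score) checkpoints; with the defaults this
    yields 40 evaluations (200k / 5k), matching the paper's schedule.
    """
    rng = random.Random(seed)
    val_subset = [f"val_pack_{i}" for i in range(n_val_packs)]  # fixed subset
    history = []
    for step in range(1, total_steps + 1):
        _ = rng.random()  # placeholder for one PPO shape-selection update
        if step % eval_interval == 0:
            score = rng.random()  # placeholder for mean score on val_subset
            history.append((step, score))
    return history
```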