A Novel Method to Solve Neural Knapsack Problems

Authors: Duanshun Li, Jing Liu, Dongeun Lee, Ali Seyedmazloom, Giridhar Kaushik, Kookjin Lee, Noseong Park

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method with two main deep learning-based experiments and one more secondary experiment with a linear KP benchmark set." and "4. Experiments: We test our proposed method in two deep learning experiments, resampling point clouds, and transductive inferences of GCNs, and one more benchmark linear KP experiment."
Researcher Affiliation | Collaboration | 1 University of Alberta, Edmonton, AB, Canada; 2 Walmart Labs, Reston, VA, USA; 3 Texas A&M University-Commerce, Commerce, TX, USA; 4 George Mason University, Fairfax, VA, USA; 5 Arizona State University, Tempe, AZ, USA; 6 Yonsei University, Seoul, South Korea.
Pseudocode | Yes | "Algorithm 1: Adaptive gradient ascent"
Open Source Code | No | The paper mentions the 'Cloud Compare Open Source Project Team' as the source of a baseline tool, but it provides no explicit statement of, or link to, open-source code for its own methodology.
Open Datasets | Yes | "We use the Princeton ModelNet40 (Zhirong Wu et al., 2015), which contains 12,308 samples from various different object classes, e.g., sofa, desk, chair, etc. 2,468 samples were reserved for our resampling test and others were used to train PointNet." and "We test two standard benchmark graphs: Cora and Citeseer. These datasets were used in many works such as (Yang et al., 2016; Kipf & Welling, 2016; Gao et al., 2018), to name a few."
Dataset Splits | No | For ModelNet40, the paper states that "2,468 samples were reserved for our resampling test and others were used to train PointNet", indicating a train/test split but no explicit validation split. For the GCN experiments, it mentions training and testing on subsets without specifying train/validation/test splits as percentages or counts.
Hardware Specification | Yes | "We use Ubuntu 18.04 LTS, Python 3.6.6, NVIDIA Driver 417.22, CUDA 10, TensorFlow 1.14.0, NumPy 1.14.5, SciPy 1.1.0, and machines with Intel Core i9 CPU and NVIDIA RTX 2080 Ti."
Software Dependencies | Yes | "We use Ubuntu 18.04 LTS, Python 3.6.6, NVIDIA Driver 417.22, CUDA 10, TensorFlow 1.14.0, NumPy 1.14.5, SciPy 1.1.0"
Experiment Setup | Yes | "We set B = 0.05, ξ = 0.1, γ = 0.0001, and k = 3,000. We initialize e_i with the LeCun normal initializer (Sutskever et al., 2013) for our method. (...) We use the default hyperparameters of LGCN. For B, we test with B = {0.005, 0.001, 0.0001}. We use ξ = 0.1 as the mini-batch size. We set the initialization of e_i to zero for all i, γ to 0.01, and k to 1000. We use the STE-based item selection network with the slope annealing step size s = 50 and the change rate r = 1.004 to generate x_i from e_i."
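The quoted setup describes two hyperparameter configurations, one per experiment. A minimal sketch collecting them, assuming a plain-dict layout; the dict names and the `annealed_slope` helper are illustrative assumptions, not the authors' code, and the annealing formula is one plausible reading of "step size s and change rate r" (the paper's quote does not give the exact schedule):

```python
# Hyperparameters quoted from the paper, gathered into illustrative configs.
# Names and structure are assumptions for readability, not the authors' code.

POINT_CLOUD_CONFIG = {
    "B": 0.05,          # budget
    "xi": 0.1,
    "gamma": 0.0001,
    "k": 3000,          # iterations
    "e_init": "lecun_normal",
}

GCN_CONFIG = {
    "B_grid": [0.005, 0.001, 0.0001],  # budgets tested
    "xi": 0.1,          # mini-batch size
    "gamma": 0.01,
    "k": 1000,
    "e_init": "zeros",
    "slope_step_s": 50,
    "slope_rate_r": 1.004,
}

def annealed_slope(step, s=50, r=1.004):
    """Slope for the STE-based item selection network.

    Assumed schedule: start at 1.0 and multiply by r once every s
    steps. The exact formula is not given in the quoted text.
    """
    return r ** (step // s)
```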