Nonnegative Tensor Completion via Integer Optimization

Authors: Caleb Bugg, Chen Chen, Anil Aswani

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Here we present results that show the efficacy and scalability of our algorithm for nonnegative tensor completion. Our experiments were conducted on a laptop computer... |
| Researcher Affiliation | Academia | University of California, Berkeley, {caleb_bugg,aaswani}@berkeley.edu; The Ohio State University, chen.8018@osu.edu |
| Pseudocode | Yes | Algorithm 1: Weak Separation Oracle for Cλ; Algorithm 2: Alternating Maximization (a hedged sketch of the alternating-maximization step appears after this table) |
| Open Source Code | Yes | There is new code in the Supplemental Material. |
| Open Datasets | No | The paper does not use a publicly available or open dataset. Instead, the true tensor was constructed for the experiments: 'In each experiment, the true tensor ψ was constructed by randomly choosing 10 points from S1 and then taking a random convex combination.' (this construction is sketched after the table) |
| Dataset Splits | No | The paper does not explicitly state dataset splits for training, validation, or testing. It mentions using 'n samples' for tensor completion. |
| Hardware Specification | Yes | Our experiments were conducted on a laptop computer with 8GB of RAM and an Intel Core i5 2.3GHz processor with 2 cores/4 threads. |
| Software Dependencies | Yes | The algorithms were coded in Python 3. We used Gurobi v9.1 (Gurobi Optimization, LLC, 2021) to solve the integer programs (13). |
| Experiment Setup | Yes | To minimize the impact of hyperparameter selection in our numerical results, we provided the ground truth values when possible. For instance, in our nonnegative tensor completion formulation (8) we chose λ to be the smallest value for which we could certify that ψ ∈ Cλ for the true tensor ψ. This was accomplished by construction of the true tensor ψ. For ALS and TNCP, we used a nonnegative rank k that was the smallest value for which we could certify that rank+(ψ) ≤ k. |
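
The table above only names the paper's two algorithms. As a rough illustration, here is a minimal Python sketch of an alternating-maximization heuristic for an objective of the form max ⟨G, x1 ⊗ ... ⊗ xp⟩ over binary vectors xi, which is one plausible reading of the role Algorithm 2 plays as a fast stand-in for the exact integer-programming separation oracle (Algorithm 1). The objective form, update rule, and stopping criterion here are our assumptions, not the paper's stated procedure.

```python
import numpy as np

def alternating_maximization(G, rng=None, max_sweeps=50):
    """Heuristically maximize <G, x1 (x) ... (x) xp> over binary vectors xi.

    A sketch under the assumptions stated above; the paper's Algorithm 2
    may differ in detail.
    """
    rng = rng or np.random.default_rng()
    xs = [rng.integers(0, 2, size=d) for d in G.shape]
    for _ in range(max_sweeps):
        changed = False
        for i in range(G.ndim):
            # Fix every factor except mode i; the objective is then linear
            # in xi, with coefficients obtained by contracting G with the
            # remaining factors.
            c = np.moveaxis(G, i, 0)
            for x in reversed([x for j, x in enumerate(xs) if j != i]):
                c = c @ x              # contracts the trailing axis with x
            xi = (c > 0).astype(int)   # set xi[k] = 1 exactly when it helps
            if not np.array_equal(xi, xs[i]):
                xs[i], changed = xi, True
        if not changed:                # a sweep with no flips: local optimum
            break
    return xs
```

Each sweep can only increase the objective, so the loop terminates at a local optimum; in a Frank-Wolfe-style completion loop one would call this on the gradient tensor of the current iterate, though whether the paper uses restarts or a different stopping rule is not stated in this summary.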
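
The 'Open Datasets' and 'Experiment Setup' rows describe how the ground-truth tensor ψ was built. Below is a hedged sketch of that construction, assuming S1 denotes the set of rank-one tensors formed as outer products of binary vectors; the summary itself does not define S1, and the tensor shape used here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = (10, 10, 10)   # illustrative shape; the paper's sizes may differ

def random_point_of_S1(dims, rng):
    """Outer product of one random {0,1} vector per tensor mode.

    ASSUMPTION: S1 is read here as the set of such rank-one binary tensors.
    """
    t = rng.integers(0, 2, size=dims[0]).astype(float)
    for d in dims[1:]:
        t = np.multiply.outer(t, rng.integers(0, 2, size=d).astype(float))
    return t

# 'randomly choosing 10 points from S1 and then taking a random convex
# combination' (quoted from the summary above)
points = [random_point_of_S1(dims, rng) for _ in range(10)]
weights = rng.dirichlet(np.ones(len(points)))
psi = sum(w * p for w, p in zip(weights, points))

# By construction psi is a convex combination of 10 points of S1, which is
# what lets the authors certify membership in C_lambda and a nonnegative
# rank bound rank+(psi) <= 10 without a hyperparameter search.
assert psi.min() >= 0.0
```

Under this reading, the construction directly supplies the ground-truth λ and nonnegative rank k referenced in the Experiment Setup row, which is why no separate hyperparameter tuning was needed.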