Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Nonnegative Tensor Completion via Integer Optimization
Authors: Caleb Bugg, Chen Chen, Anil Aswani
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here we present results that show the efficacy and scalability of our algorithm for nonnegative tensor completion. Our experiments were conducted on a laptop computer... |
| Researcher Affiliation | Academia | University of California, Berkeley, EMAIL The Ohio State University, EMAIL |
| Pseudocode | Yes | Algorithm 1: Weak Separation Oracle for Cλ; Algorithm 2: Alternating Maximization |
| Open Source Code | Yes | There is new code in the Supplemental Material. |
| Open Datasets | No | The paper does not use a publicly available or open dataset. Instead, the true tensor was constructed for the experiments: 'In each experiment, the true tensor θ was constructed by randomly choosing 10 points from S1 and then taking a random convex combination.' |
| Dataset Splits | No | The paper does not explicitly state dataset splits for training, validation, or testing. It mentions using 'n samples' for tensor completion. |
| Hardware Specification | Yes | Our experiments were conducted on a laptop computer with 8GB of RAM and an Intel Core i5 2.3Ghz processor with 2-cores/4-threads. |
| Software Dependencies | Yes | The algorithms were coded in Python 3. We used Gurobi v9.1 (Gurobi Optimization, LLC, 2021) to solve the integer programs (13). |
| Experiment Setup | Yes | To minimize the impact of hyperparameter selection in our numerical results, we provided the ground truth values when possible. For instance, in our nonnegative tensor completion formulation (8) we chose λ to be the smallest value for which we could certify that θ ∈ Cλ for the true tensor θ. This was accomplished by construction of the true tensor θ. For ALS and TNCP, we used a nonnegative rank k that was the smallest value for which we could certify that rank+(θ) ≤ k. |
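The tensor-construction step quoted in the Open Datasets row (choosing 10 points and taking a random convex combination) can be sketched as follows. This is a minimal illustration, not the paper's actual code: the use of rank-one nonnegative tensors as stand-ins for points from the paper's set S1, the normalization, and the Dirichlet draw for simplex weights are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rank_one_tensor(shape, rng):
    """Hypothetical stand-in for a point from the paper's set S1:
    a nonnegative rank-one tensor, normalized to sum to 1
    (an illustrative choice, not taken from the paper)."""
    factors = [np.abs(rng.standard_normal(n)) for n in shape]
    t = factors[0]
    for f in factors[1:]:
        t = np.multiply.outer(t, f)
    return t / t.sum()

def random_convex_combination(points, rng):
    """Random convex combination: weights drawn uniformly
    from the probability simplex via a Dirichlet(1,...,1) draw."""
    weights = rng.dirichlet(np.ones(len(points)))
    return sum(w * p for w, p in zip(weights, points))

shape = (4, 4, 4)
points = [random_rank_one_tensor(shape, rng) for _ in range(10)]
theta = random_convex_combination(points, rng)  # the "true tensor"
```

Because each point is nonnegative and sums to 1, the convex combination `theta` inherits both properties, which is what makes the ground-truth certification described in the Experiment Setup row possible by construction.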