Approximately Optimal Core Shapes for Tensor Decompositions

Authors: Mehrdad Ghadiri, Matthew Fahrbach, Gang Fu, Vahab Mirrokni

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We implement our algorithm using an integer programming solver, and show that its solution quality is competitive with (and sometimes better than) the greedy algorithm that uses the true Tucker decomposition loss at each step, while also running up to 1000x faster." ... "We compare several algorithms for computing the core shapes of size-constrained Tucker decompositions on four real-world tensors (see Table 1)." (Section 6, Experiments)
Researcher Affiliation | Collaboration | "¹Georgia Tech, ²Google Research."
Pseudocode | Yes | "Algorithm 1: Tucker HOSVD" ... "Algorithm 2: Tucker Packing Solver"
Open Source Code | Yes | "The source code is available at https://github.com/fahrbach/approximately-optimal-core-shapes."
Open Datasets | Yes | "Hyperspectral. A 1024 × 1344 × 33 tensor of time-lapse hyperspectral radiance images of a nature scene that is undergoing illumination changes (Nascimento et al., 2016)." ... "COIL-100. A 7200 × 128 × 128 × 3 tensor containing 7200 colored photos of 100 different objects (72 images per object) taken at 5-degree rotations (Nene et al., 1996)."
Dataset Splits | No | The paper uses real-world tensors but does not describe how the data was split into training, validation, or test sets, nor does it refer to predefined splits.
Hardware Specification | Yes | "All experiments use NumPy (Harris et al., 2020) with an Intel Xeon W-1235 processor (3.7 GHz, 8.25 MB cache) and 128 GB of RAM."
Software Dependencies | No | The paper mentions NumPy (Harris et al., 2020) and scipy.optimize.milp but does not specify version numbers for these software dependencies.
Experiment Setup | Yes | "Each L(X, r) computation uses 20 iterations of HOOI."
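The experiment-setup row states that each Tucker loss evaluation L(X, r) runs 20 iterations of HOOI (higher-order orthogonal iteration), and the pseudocode row references a Tucker HOSVD routine. The following NumPy sketch shows what such a loss evaluation could look like; it is an illustrative implementation of standard HOSVD-initialized HOOI under assumed conventions, not the authors' released code.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hooi(X, ranks, num_iters=20):
    """Tucker decomposition via HOOI with core shape `ranks`.

    Returns factor matrices U[0..d-1] and the core tensor G.
    A minimal sketch; the paper's implementation may differ in details.
    """
    # HOSVD initialization: leading singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(num_iters):
        for n in range(X.ndim):
            # Project X onto every factor except mode n...
            Y = X
            for m in range(X.ndim):
                if m != n:
                    Y = np.moveaxis(
                        np.tensordot(U[m].T, np.moveaxis(Y, m, 0), axes=1), 0, m)
            # ...then refresh U[n] with the leading singular vectors.
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    # Core tensor: project X onto all factors.
    G = X
    for m in range(X.ndim):
        G = np.moveaxis(np.tensordot(U[m].T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return U, G
```

The reconstruction loss L(X, r) reported in the table would then be the Frobenius norm of the difference between X and the tensor rebuilt from G and the factors.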
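The research-type and dependency rows note that the core-shape algorithm is implemented with an integer programming solver, via scipy.optimize.milp. As a hedged illustration of that solver's API only — on a toy 0/1 knapsack, not the paper's actual Tucker packing formulation — a call looks like this:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy knapsack: pick items maximizing value under a weight budget.
# Stands in for the paper's packing IP; the real objective and
# constraints differ.
values = np.array([10.0, 13.0, 7.0, 4.0])
weights = np.array([5.0, 6.0, 4.0, 2.0])
budget = 10.0

res = milp(
    c=-values,                                           # milp minimizes, so negate
    constraints=LinearConstraint(weights.reshape(1, -1), ub=budget),
    integrality=np.ones_like(values),                    # all variables integer
    bounds=Bounds(0, 1),                                 # 0/1 decision variables
)
selected = res.x.round().astype(int)                     # e.g. which items to take
```

scipy.optimize.milp (available since SciPy 1.9) wraps the HiGHS solver; `res.status == 0` signals that an optimal integer solution was found.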