Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Mode-wise Tensor Decompositions: Multi-dimensional Generalizations of CUR Decompositions
Authors: HanQin Cai, Keaton Hamm, Longxiu Huang, Deanna Needell
JMLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical performance evaluations, with both synthetic and real-world datasets, establish the speed advantage of the tensor CUR approximations over other state-of-the-art low multilinear rank tensor approximations. |
| Researcher Affiliation | Academia | HanQin Cai, EMAIL, Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA; Keaton Hamm, EMAIL, Department of Mathematics, University of Texas at Arlington, Arlington, TX 76019, USA; Longxiu Huang, EMAIL, Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA; Deanna Needell, EMAIL, Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA. |
| Pseudocode | Yes | Algorithm 1 Randomized Chidori CUR Decomposition. Algorithm 2 Randomized Fiber CUR Decomposition. |
| Open Source Code | Yes | The code for the Fiber and Chidori CUR decompositions is available online at https://github.com/caesarcai/Modewise_Tensor_Decomp. |
| Open Datasets | Yes | We consider the use of the Fiber and Chidori CUR decompositions for hyperspectral image compression on three benchmark datasets from (Foster et al., 2004): Ribeira, Braga, and Ruivaes. The datasets can be found online at https://personalpages.manchester.ac.uk/staff/d.h.foster/Hyperspectral_images_of_natural_scenes_04.html. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits. The experiments described involve approximation and compression of whole tensors (synthetic or image data), rather than training models on partitioned datasets that would require explicit splits. |
| Hardware Specification | Yes | All tests are conducted on an Ubuntu workstation equipped with an Intel i9-9940X CPU and 128GB of DDR4 RAM. |
| Software Dependencies | Yes | All tests are conducted on an Ubuntu workstation equipped with an Intel i9-9940X CPU and 128GB of DDR4 RAM, and executed in MATLAB R2020a. We use the implementations of HOSVD, st-HOSVD, and HOOI from Tensor Toolbox v3.12. |
| Experiment Setup | No | The paper describes general experimental settings (noise levels and tensor dimensions for synthetic data; ranks for real-world data), but it does not provide detailed system-level configurations such as learning rates, batch sizes, or optimizer settings. This is expected: the methods are deterministic-or-randomized approximation techniques, not iteratively trained machine learning models. |
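For readers unfamiliar with CUR-type decompositions, the following is a minimal sketch of the *matrix* CUR decomposition that the paper's Fiber and Chidori tensor algorithms generalize. It is illustrative only and is not the authors' Algorithm 1 or 2; the function name `cur` and the sampling sizes are our own choices. For a rank-r matrix, sampling enough rows I and columns J so that the intersection U = A[I, J] has rank r gives exact recovery A = C U⁺ R.

```python
import numpy as np

rng = np.random.default_rng(0)

def cur(A, row_idx, col_idx):
    """Matrix CUR sketch: A ≈ C @ pinv(U) @ R, where
    C = A[:, J] (sampled columns), R = A[I, :] (sampled rows),
    and U = A[I, J] is their intersection."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = A[np.ix_(row_idx, col_idx)]
    return C, np.linalg.pinv(U), R

# Rank-5 test matrix; with Gaussian factors, 10 sampled rows/columns
# capture the rank almost surely, so reconstruction is exact up to
# floating-point error.
r = 5
A = rng.standard_normal((100, r)) @ rng.standard_normal((r, 80))
I = rng.choice(100, size=10, replace=False)
J = rng.choice(80, size=10, replace=False)
C, U_pinv, R = cur(A, I, J)
rel_err = np.linalg.norm(A - C @ U_pinv @ R) / np.linalg.norm(A)
```

The tensor versions in the paper apply this idea mode-wise: Fiber CUR samples mode-k fibers, while Chidori CUR samples aligned index blocks, and the speed advantage reported in the experiments comes from only ever touching the sampled subtensors.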