Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Fast Tensor Completion via Approximate Richardson Iteration
Authors: Mehrdad Ghadiri, Matthew Fahrbach, Yunbum Kook, Ali Jadbabaie
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We analyze the convergence rate of our approximate Richardson iteration-based algorithm, and our empirical study shows that it can be 100x faster than direct methods for CP completion on real-world tensors. |
| Researcher Affiliation | Collaboration | ¹MIT ²Google Research ³Georgia Tech. Correspondence to: Mehrdad Ghadiri <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: approx-mini-als |
| Open Source Code | Yes | The code is available at https://github.com/fahrbach/fast-tensor-completion. |
| Open Datasets | Yes | HYPERSPECTRAL is a 1024 × 1344 × 33 tensor of time-lapse hyperspectral radiance images (Nascimento et al., 2016). |
| Dataset Splits | Yes | For a given tensor X and sample ratio p ∈ [0, 1], let X_Ω be a partially observed tensor with a random p fraction of entries revealed. We fit X_Ω with a rank-R CP decomposition by minimizing the training relative reconstruction error (RRE) ‖(X̂ − X)_Ω‖_F / ‖X_Ω‖_F using different ALS algorithms. ...the test RRE ‖X̂ − X‖_F / ‖X‖_F (dashed line) is the loss on the entire (mostly unobserved) tensor |
| Hardware Specification | Yes | All experiments are implemented with NumPy (Harris et al., 2020) and Tensorly (Kossaifi et al., 2019) on an Apple M2 chip with 8 GB of RAM. |
| Software Dependencies | Yes | All experiments are implemented with NumPy (Harris et al., 2020) and Tensorly (Kossaifi et al., 2019) on an Apple M2 chip with 8 GB of RAM. |
| Experiment Setup | Yes | For these experiments, we initialize X = Y = I. ...we set p = 0.1 and sweep over the rank R using direct to demonstrate how the train RRE (solid line) and test RRE ‖X̂ − X‖_F / ‖X‖_F (dashed line) decrease as a function of R and the ALS step count. In the second column, we fix the parameters (p, R) = (0.1, 16) |
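The dataset-split protocol quoted above (reveal a random p fraction of entries, then measure train RRE on the observed entries and test RRE on the full tensor) can be sketched in NumPy, which the paper's experiments also use. This is an illustrative sketch only: the function names `sample_mask` and `rre` are ours, not from the authors' released code.

```python
import numpy as np

def sample_mask(shape, p, seed=0):
    """Boolean mask that reveals a random p fraction of entries (our naming,
    mirroring the paper's X_Omega construction)."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < p

def rre(X_hat, X, mask=None):
    """Relative reconstruction error ||(X_hat - X)_Omega||_F / ||X_Omega||_F.
    With mask=None this is the test RRE over the entire tensor."""
    diff = X_hat - X
    if mask is not None:
        diff = np.where(mask, diff, 0.0)
        X = np.where(mask, X, 0.0)
    return np.linalg.norm(diff) / np.linalg.norm(X)

# Toy example: a small dense tensor with 10% of entries revealed.
X = np.arange(1.0, 25.0).reshape(2, 3, 4)
mask = sample_mask(X.shape, p=0.1)
train_rre = rre(np.zeros_like(X), X, mask)  # train RRE of the all-zeros guess
test_rre = rre(np.zeros_like(X), X)         # test RRE over the full tensor -> 1.0
```

In the paper's pipeline, `X_hat` would be the rank-R CP reconstruction produced by an ALS solver (e.g. via Tensorly) rather than the placeholder zeros used here.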