Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations

Authors: Quanming Yao, James Tin-Yau Kwok, Bo Han

ICML 2019

Each entry below gives a reproducibility variable, its assessed result, and the LLM's supporting response:
Research Type: Experimental. "Experimental results on a number of synthetic and real-world data sets show that the proposed algorithm is more efficient in both time and space, and is also more accurate than existing approaches."
Researcher Affiliation: Collaboration. "4Paradigm Inc, Beijing, China; Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong; Center for Advanced Intelligence Project, RIKEN, Japan."
Pseudocode: Yes. "Algorithm 1 NOnconvex Regularized Tensor (NORT)."
Open Source Code: No. The paper does not provide an explicit statement about releasing its source code or a link to a code repository.
Open Datasets: Yes. "We use windows, tree and rice from (Hu et al., 2013)... Experiments are performed on three hyper-spectral data sets (Appendix C.3): Cabbage (1312 × 432 × 49), Scene (1312 × 951 × 49) and Female (592 × 409 × 148)... on the YouTube data set (Lei et al., 2009)."
Dataset Splits: Yes. Synthetic data: "We use 50% of them for training, and the remaining 50% for validation. Testing is evaluated on the unobserved elements in O." Color images: "We randomly sample 10% of the pixels for training, which are then corrupted by Gaussian noise N(0, 0.01). Half of the training pixels are used for validation." Social networks: "We use 50% of the observations for training, another 25% for validation and the rest for testing."
Hardware Specification: Yes. "Experiments are performed on a PC with Intel-i8 CPU and 32GB memory."
Software Dependencies: No. The paper mentions an implementation in Matlab and C, but does not provide specific version numbers for these software dependencies or for any libraries used.
Experiment Setup: Yes. "For NORT, τ has to be larger than ρ + DL (Corollary 3.6). However, a large τ leads to slow convergence (Remark 3.2). Hence, we set τ = 1.01(ρ + DL). Moreover, we set γ1 = 0.1 and p = 0.5 as in (Li et al., 2017). Besides, F_τ in step 5 of Algorithm 1 is hard to evaluate, and we use F instead as in (Zhong & Kwok, 2014)."
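The split protocol quoted above (50% train / 25% validation / 25% test over the observed entries, as described for the social-network experiments) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `split_observed` and its arguments are hypothetical.

```python
import numpy as np

def split_observed(obs_indices, seed=0):
    """Illustrative 50%/25%/25% train/validation/test split of the
    observed tensor entries, per the social-network protocol quoted
    above. `obs_indices` is a list of (i, j, k) index tuples."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(obs_indices))  # shuffle entry order
    n = len(order)
    n_train, n_val = n // 2, n // 4            # 50% train, 25% validation
    train = [obs_indices[i] for i in order[:n_train]]
    val = [obs_indices[i] for i in order[n_train:n_train + n_val]]
    test = [obs_indices[i] for i in order[n_train + n_val:]]  # remaining 25%
    return train, val, test
```

Testing on the remaining unobserved entries (as in the synthetic-data protocol) would instead hold out all indices not in `obs_indices`.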