LERE: Learning-Based Low-Rank Matrix Recovery with Rank Estimation

Authors: Zhengqin Xu, Yulun Zhang, Chao Ma, Yichao Yan, Zelin Peng, Shoulie Xie, Shiqian Wu, Xiaokang Yang

AAAI 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that LERE surpasses state-of-the-art (SOTA) methods. The code for this work is accessible at https://github.com/zhengqinxu/LERE.
Researcher Affiliation | Collaboration | 1MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China; 2ETH Zürich, Switzerland; 3Signal Processing, RF & Optical Dept., Institute for Infocomm Research, A*STAR, Singapore; 4School of Information Science and Engineering, Wuhan University of Science and Technology, China. {fate311, chaoma, yanyichao, zelin.peng, xkyang}@sjtu.edu.cn, yulun100@gmail.com, slxie@i2r.a-star.edu.sg, shiqian.wu@wust.edu.cn
Pseudocode | Yes | Algorithm 1: LERE for RPCA
Open Source Code | Yes | The code for this work is accessible at https://github.com/zhengqinxu/LERE.
Open Datasets | Yes | The SuiteSparse Matrix Collection1, which provides general data matrices with low numerical ranks, is tested, and these outcomes are depicted in Fig. 4. Furthermore, we also conduct evaluation on the video dataset from the Scene Background Initialization (SBI) datasets2 and the Low Dynamic Range (LDR) datasets3. (1: https://sparse.tamu.edu/; 2: https://sbmi2015.na.icar.cnr.it/SBIdataset.html; 3: http://alumni.soe.ucsc.edu/~orazio/deghost.html)
Dataset Splits | No | The paper describes testing on synthetic and real-world datasets but does not explicitly mention any training/validation/test splits of a single dataset. It mentions 'training one set of parameters by the small sparsity dataset' but gives no specific split percentages or sample counts.
Hardware Specification | Yes | All of our tests run on a Windows 10 laptop with an Intel i7-9750H CPU and 64 GB RAM. The parameter-learning processes run on an Ubuntu workstation with an Intel i9-9900K CPU and two Nvidia RTX 2080 Ti GPUs.
Software Dependencies | No | The paper mentions employing the 'Feedforward Recurrent-Mixed Neural Network (FRMNN) (Cai, Liu, and Yin 2021)' and 'RPCA' but does not specify version numbers for programming languages (e.g., Python), libraries (e.g., PyTorch, TensorFlow), or other software dependencies.
Experiment Setup | Yes | In our method, the sampling numbers are I = 4r·log(n1) and J = 4r·log(n2); the iteration number of FRMNN is 17. All parameters in the compared methods follow their default settings. We corrupt the input matrix Y ∈ ℝ^(n1×n2) by sparse noise with varying corruption rates α = {0.1, 0.3, 0.5} and parameters n1 = 3000, n2 = {3000, 1000, 500}, rank r = 5; the iteration stop criterion is ‖Y − L − S‖_F / ‖Y‖_F < 10^(−7), where L and S are the reconstruction matrices; the maximum number of iterations is 200.
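The synthetic setup quoted above can be sketched as follows. This is not the authors' released code; it is a minimal NumPy illustration of the described protocol, assuming a standard random low-rank-plus-sparse construction (the function names and the uniform noise model are our own choices, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_corrupted_matrix(n1=3000, n2=3000, r=5, alpha=0.1):
    """Build Y = L_true + S_true: a rank-r matrix corrupted by
    sparse noise at corruption rate alpha (noise model assumed)."""
    L_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
    mask = rng.random((n1, n2)) < alpha            # fraction alpha of entries corrupted
    S_true = np.zeros((n1, n2))
    S_true[mask] = rng.uniform(-1.0, 1.0, mask.sum())
    return L_true + S_true, L_true, S_true

def converged(Y, L, S, tol=1e-7):
    """Stopping criterion from the paper: ||Y - L - S||_F / ||Y||_F < tol."""
    return np.linalg.norm(Y - L - S) / np.linalg.norm(Y) < tol

# Sampling numbers as stated: I = 4*r*log(n1), J = 4*r*log(n2)
n1, n2, r = 3000, 3000, 5
I = int(4 * r * np.log(n1))
J = int(4 * r * np.log(n2))

# Small instance for a quick sanity check (the paper uses n1 = 3000)
Y, L_true, S_true = make_corrupted_matrix(n1=300, n2=300, r=5, alpha=0.1)
```

The exact decomposition (L_true, S_true) trivially satisfies the stopping criterion, which makes `converged` easy to sanity-check before plugging in an actual recovery loop capped at 200 iterations.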