Efficient Structured Matrix Rank Minimization
Authors: Adams Wei Yu, Wanli Ma, Yaoliang Yu, Jaime Carbonell, Suvrit Sra
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results show that our approach significantly outperforms state-of-the-art competitors in terms of running time, while effectively recovering low rank solutions in stochastic system realization and spectral compressed sensing problems. [Section 3, Experiments:] In this section, we present empirical results using our algorithms. |
| Researcher Affiliation | Academia | School of Computer Science, Carnegie Mellon University; Max Planck Institute for Intelligent Systems. {weiyu, mawanli, yaoliang, jgc}@cs.cmu.edu, suvrit@tuebingen.mpg.de |
| Pseudocode | Yes | Algorithm 1 Generalized Conditional Gradient for Structured Matrix Rank Minimization. 1: Initialize U_0, V_0; 2: for k = 1, 2, ... do; 3: (u_k, v_k) ← top singular vector pair of ∇f(U_{k−1}V_{k−1}); 4: set η_k ← 2/(k + 1), and θ_k by (13); 5: U_init ← (√(1 − η_k) U_{k−1}, √(θ_k) u_k); V_init ← (√(1 − η_k) V_{k−1}, √(θ_k) v_k); 6: (U_k, V_k) ← arg min over (U, V) of the objective, using initializer (U_init, V_init); 7: end for. (A runnable sketch of this loop follows the table.) |
| Open Source Code | No | The paper does not provide any explicit statements about making the source code available or include links to a code repository. |
| Open Datasets | No | Data generation. Each entry of the matrices D ∈ R^{r×r}, E ∈ R^{r×n}, F ∈ R^{n×r} is sampled from a Gaussian distribution N(0, 1). Then they are normalized to have unit nuclear norm. The initial state vector s_0 is drawn from N(0, I_r) and the input white noise u_t from N(0, I_n). The measurement noise is modeled by adding a σε term to the output z_t, so the actual observation is z̃_t = z_t + σε, where each entry of ε ∈ R^n is standard Gaussian noise, and σ is the noise level. [...] we generate a ground truth data matrix Y ∈ R^{101×101} through a superposition of r = 6 2-D sinusoids, randomly reveal 20% of the entries, and add i.i.d. Gaussian noise with amplitude signal-to-noise ratio 10. (A sketch of both generation procedures follows the table.) |
| Dataset Splits | No | The paper describes how data is generated for experiments but does not provide specific details on training, validation, or test splits (e.g., percentages, sample counts, or a standard split citation). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. It mentions algorithms like Lanczos and FISTA but not specific software implementations with versions. |
| Experiment Setup | Yes | Throughout this experiment, we set T = 1000, σ = 0.05, the maximum iteration limit as 100, and the stopping criterion as ‖x_{k+1} − x_k‖_F < 10^{-3} or \|φ_{k+1} − φ_k\| / \|min(φ_{k+1}, φ_k)\| < 10^{-3}. The initial iterate is a matrix of all ones. [...] using k1 = k2 = 8, µ = 0.1. (A small helper for the stopping test follows the table.) |
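
Below is a minimal sketch of the generalized conditional gradient loop quoted in the Pseudocode row, assuming a generic smooth loss with gradient `grad_f` and a local refinement routine `local_search` (both hypothetical placeholders, not the authors' code). The weight `theta_k`, which the paper sets via its equation (13), is replaced here by a fixed placeholder.

```python
import numpy as np

def gcg_rank_min(grad_f, local_search, m, n, max_iter=100):
    """Sketch of a GCG loop for matrix rank minimization in factored form (U, V),
    following the structure of the quoted Algorithm 1. grad_f and local_search
    are assumed callables; theta_k below stands in for the paper's rule (13)."""
    U = np.ones((m, 1))
    V = np.ones((n, 1))
    for k in range(1, max_iter + 1):
        # Top singular vector pair of the gradient at the current iterate.
        G = grad_f(U @ V.T)
        u, s, vt = np.linalg.svd(G, full_matrices=False)
        u_k, v_k = u[:, :1], vt[:1, :].T
        eta_k = 2.0 / (k + 1)      # step size eta_k = 2/(k+1) from the algorithm
        theta_k = 1.0              # placeholder for the rule in the paper's eq. (13)
        # Warm-start the factors by appending the scaled new rank-one atom.
        U_init = np.hstack([np.sqrt(1.0 - eta_k) * U, np.sqrt(theta_k) * u_k])
        V_init = np.hstack([np.sqrt(1.0 - eta_k) * V, np.sqrt(theta_k) * v_k])
        # Local refinement of the factored objective from the warm start.
        U, V = local_search(U_init, V_init)
    return U, V
```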
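
The next sketch illustrates the two synthetic data-generation procedures quoted in the Open Datasets row. The state-space recursion (s_{t+1} = D s_t + E u_t, z_t = F s_t), the random choice of sinusoid frequencies, and the exact noise scaling are assumptions filled in for illustration; the excerpt only specifies the distributions, dimensions, and noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_nuclear_norm(M):
    """Scale a matrix so its nuclear norm (sum of singular values) equals 1."""
    return M / np.linalg.svd(M, compute_uv=False).sum()

def stochastic_system_realization(r, n, T, sigma):
    """Gaussian system matrices normalized to unit nuclear norm, Gaussian initial
    state and input noise, observations corrupted by sigma-scaled Gaussian noise.
    The recursion used here is an assumed standard linear state-space model."""
    D = unit_nuclear_norm(rng.standard_normal((r, r)))
    E = unit_nuclear_norm(rng.standard_normal((r, n)))
    F = unit_nuclear_norm(rng.standard_normal((n, r)))
    s = rng.standard_normal(r)                         # s_0 ~ N(0, I_r)
    Z = np.empty((T, n))
    for t in range(T):
        u_t = rng.standard_normal(n)                   # white-noise input ~ N(0, I_n)
        Z[t] = F @ s + sigma * rng.standard_normal(n)  # observed z_t + sigma * eps
        s = D @ s + E @ u_t
    return Z

def spectral_compressed_sensing(size=101, r=6, obs_ratio=0.2, snr=10.0):
    """Ground-truth matrix as a superposition of r 2-D sinusoids with ~20% of
    entries revealed; frequencies and noise scaling are illustrative assumptions."""
    x = np.arange(size)
    Y = np.zeros((size, size))
    for _ in range(r):
        f1, f2 = rng.uniform(0.0, 0.5, size=2)         # random 2-D frequencies (assumed)
        Y += np.cos(2 * np.pi * (f1 * x[:, None] + f2 * x[None, :]))
    mask = rng.random((size, size)) < obs_ratio        # reveal ~20% of entries
    noise = rng.standard_normal((size, size)) * (np.abs(Y).mean() / snr)
    return np.where(mask, Y + noise, 0.0), mask
```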
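
Finally, a small helper spelling out the stopping criterion quoted in the Experiment Setup row (stop when the Frobenius-norm change of the iterate or the relative change of the objective falls below 10^{-3}); variable names are placeholders.

```python
import numpy as np

def converged(X_new, X_old, phi_new, phi_old, tol=1e-3):
    """Return True when either stopping test from the quoted setup is met."""
    iterate_change = np.linalg.norm(X_new - X_old, ord="fro")
    objective_change = abs(phi_new - phi_old) / abs(min(phi_new, phi_old))
    return iterate_change < tol or objective_change < tol
```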