The Noisy Power Method: A Meta Algorithm with Applications
Authors: Moritz Hardt, Eric Price
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix that we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad-hoc convergence bounds and resolves a number of open problems in multiple applications. |
| Researcher Affiliation | Industry | Moritz Hardt (IBM Research Almaden); Eric Price (IBM Research Almaden) |
| Pseudocode | Yes | Figure 1: Noisy Power Method (NPM); Figure 2: Streaming Power Method (SPM); Figure 3: Private Power Method (PPM). |
| Open Source Code | No | The paper does not provide a link to, or any other concrete means of accessing, source code. |
| Open Datasets | No | The paper focuses on theoretical analysis and does not describe experiments on publicly available datasets or provide access information for any datasets used in experiments. |
| Dataset Splits | No | The paper focuses on theoretical analysis and does not describe training, validation, or test splits for empirical experiments. |
| Hardware Specification | No | The paper does not specify any hardware used for experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper focuses on theoretical analysis and algorithm design, not empirical experimentation, and therefore does not include specific experimental setup details like hyperparameters or training configurations. |
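The noisy power method described in the abstract can be illustrated with a short sketch. This is not the authors' code (none is released); it is a minimal NumPy rendering of the general scheme in Figure 1 of the paper, assuming a symmetric input matrix. The function name, the `noise_scale` parameter, and the Gaussian noise model are illustrative choices; the paper's analysis allows arbitrary (adversarial) per-iteration noise subject to norm bounds.

```python
import numpy as np

def noisy_power_method(A, k, iters, noise_scale=0.0, rng=None):
    """Sketch of the noisy power method: subspace iteration on a symmetric
    matrix A, where a noise term G_t perturbs each matrix product A @ X.

    Illustrative only: the paper treats arbitrary bounded noise G_t, not
    just the Gaussian noise drawn here.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Start from a random orthonormal n x k basis.
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        # Noisy matrix-vector multiplication step: Y = A X + G.
        G = noise_scale * rng.standard_normal((n, k))
        Y = A @ X + G
        # Re-orthonormalize via QR, as in standard subspace iteration.
        X, _ = np.linalg.qr(Y)
    return X
```

With `noise_scale=0` this reduces to ordinary subspace iteration; the paper's contribution is bounding how large the noise may be while the iterates still converge to the top-k invariant subspace.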