Convergence of Stochastic Gradient Descent for PCA
Authors: Ohad Shamir
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we provide (to the best of our knowledge) the first eigengap-free convergence guarantees for SGD in the context of PCA. This also partially resolves an open problem posed in (Hardt & Price, 2014). Moreover, under an eigengap assumption, we show that the same techniques lead to new SGD convergence guarantees with better dependence on the eigengap. |
| Researcher Affiliation | Academia | Ohad Shamir, OHAD.SHAMIR@WEIZMANN.AC.IL, Weizmann Institute of Science, Israel |
| Pseudocode | Yes | Initialize by picking a unit-norm vector w_0. For t = 1, ..., T, perform w_t = (I + η A_t) w_{t−1}. Return w_T / ‖w_T‖. (A minimal runnable sketch of this iteration follows the table.) |
| Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper is theoretical and does not use or mention specific datasets for training. It refers only to data x_1, x_2, ... ∈ ℝ^d drawn i.i.d. from an 'unknown underlying distribution'. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments or dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware, therefore no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not describe any software implementations with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with concrete hyperparameter values or system-level training settings. |
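
The paper itself contains no code, only the pseudocode quoted above. As a minimal sketch of that iteration, assuming rank-one stochastic matrices A_t = x_t x_tᵀ (the standard instantiation for PCA) and an illustrative function name `sgd_pca`, the algorithm can be written in NumPy as follows:

```python
import numpy as np

def sgd_pca(xs, eta, seed=None):
    """One pass of the quoted SGD iteration for the leading PCA component.

    xs   : (T, d) array of i.i.d. data points x_1, ..., x_T
    eta  : step size η
    Returns a unit-norm estimate of the top eigenvector of E[x xᵀ].
    """
    rng = np.random.default_rng(seed)
    T, d = xs.shape
    # Initialize by picking a unit-norm vector w_0.
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for t in range(T):
        x = xs[t]
        # With A_t = x_t x_tᵀ, the update w_t = (I + η A_t) w_{t-1}
        # reduces to a rank-one step: w + η x (xᵀ w).
        w = w + eta * x * (x @ w)
    # Normalize only at the end, as in the pseudocode: return w_T / ||w_T||.
    return w / np.linalg.norm(w)

# Illustrative usage on synthetic data (not from the paper):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.standard_normal((10_000, 20)) * np.linspace(2.0, 0.1, 20)
    v = sgd_pca(xs, eta=0.01, seed=0)
    print(v[:3])  # should align with the first coordinate axis
```

Note that, unlike Oja-style variants that renormalize w_t at every step, the quoted pseudocode normalizes only once at the end; the sketch above follows that choice.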