On the Optimization Landscape of Tensor Decompositions
Authors: Rong Ge, Tengyu Ma
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we analyze the optimization landscape of the random over-complete tensor decomposition problem, which has many applications in unsupervised learning, especially in learning latent variable models. We show that for any small constant ε > 0, among the set of points with function values (1 + ε)-factor larger than the expectation of the function, all the local maxima are approximate global maxima. Our main technique uses the Kac-Rice formula and random matrix theory. |
| Researcher Affiliation | Collaboration | Rong Ge, Duke University (rongge@cs.duke.edu); Tengyu Ma, Facebook AI Research (tengyuma@cs.stanford.edu) |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements about the availability of source code, nor does it include a link to a code repository. |
| Open Datasets | No | The paper describes a theoretical model where 'vectors a_i ∈ ℝ^d are assumed to be drawn i.i.d. from Gaussian distribution N(0, I),' rather than using a publicly available dataset for training. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments involving dataset splits (training, validation, test). |
| Hardware Specification | No | The paper is theoretical and does not describe any computational experiments or hardware used. |
| Software Dependencies | No | The paper is theoretical and does not mention any software dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations. |
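Since the paper is purely theoretical and, as the table notes, ships no code, a reader wanting to probe its setting must reconstruct it. Below is a minimal sketch of the random over-complete model the table quotes (components a_i drawn i.i.d. from N(0, I) with n ≫ d). The degree-4 objective f(x) = Σ_i ⟨a_i, x⟩⁴ over the unit sphere, and the projected-gradient-ascent loop, are assumptions chosen to match the standard form of this problem; the table does not reproduce the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500  # over-complete regime: n >> d components in dimension d

# Components drawn i.i.d. from N(0, I), as quoted in the Open Datasets row.
A = rng.standard_normal((n, d))

def f(x):
    """Degree-4 objective sum_i <a_i, x>^4 (assumed standard form;
    not copied from the paper)."""
    return np.sum((A @ x) ** 4)

def grad_sphere(x):
    """Riemannian gradient of f on the unit sphere."""
    g = 4.0 * A.T @ ((A @ x) ** 3)  # Euclidean gradient of f
    return g - (g @ x) * x          # project onto the tangent space at x

# Projected gradient ascent from a random start on the sphere.
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for _ in range(200):
    x = x + 1e-4 * grad_sphere(x)
    x /= np.linalg.norm(x)  # retract back onto the sphere
```

Under the paper's landscape result, local maxima whose value is a (1 + ε)-factor above the expectation should be approximate global maxima, so such a local search is expected to land near a (normalized) component ±a_i/‖a_i‖; this sketch only sets up that experiment, it does not certify it.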