Linear-Sample Learning of Low-Rank Distributions

Authors: Ayush Jain, Alon Orlitsky

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main result is a polynomial-time algorithm, curated SVD, that returns an estimate M^cur := M^cur(X) that essentially achieves the above lower bound for all five models and all matrices M. The proofs establish new results on the rapid convergence of the spectral distance between the model and observation matrices, and may be of independent interest. (An illustrative spectral-estimation sketch follows the table.)
Researcher Affiliation | Academia | Ayush Jain and Alon Orlitsky, Dept. of Electrical and Computer Engineering, University of California, San Diego; {ayjain, aorlitsky}@eng.ucsd.edu
Pseudocode | Yes | The pseudo-code of the algorithm is in Appendix B.
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not perform empirical evaluations on a dataset; therefore, no public dataset is specified.
Dataset Splits | No | The paper is theoretical and does not perform empirical evaluations on a dataset; therefore, no training/validation/test splits are provided.
Hardware Specification | No | The paper is theoretical and does not mention specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe specific experimental setup details or hyperparameters.
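
Since curated SVD is given only as pseudo-code (Appendix B of the paper) and no reference implementation is released, the sketch below shows a plain truncated-SVD baseline for estimating a k x k, rank-r distribution matrix from sampled counts. This is not the authors' curated-SVD method, only a minimal illustration of the generic spectral-truncation idea that, per the quoted result, the paper improves upon; the sampling model, function names, and parameters are illustrative assumptions.

# Minimal sketch: plain truncated-SVD estimate of a low-rank distribution
# matrix from an observed count matrix. Illustrative only; NOT the paper's
# curated-SVD algorithm.
import numpy as np

def truncated_svd_estimate(counts: np.ndarray, rank: int) -> np.ndarray:
    """Estimate a k x k distribution matrix from integer co-occurrence counts.

    counts : observed count matrix X (assumed drawn from the unknown M).
    rank   : target rank r of the estimate.
    """
    n = counts.sum()
    if n == 0:
        raise ValueError("need at least one sample")
    empirical = counts / n                      # empirical distribution matrix
    U, s, Vt = np.linalg.svd(empirical, full_matrices=False)
    s[rank:] = 0.0                              # keep only the top-r spectrum
    estimate = (U * s) @ Vt
    estimate = np.clip(estimate, 0.0, None)     # project back to nonnegative
    total = estimate.sum()
    return estimate / total if total > 0 else empirical

# Example usage on synthetic rank-2 data (illustrative assumption).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, r, n_samples = 50, 2, 100_000
    A, B = rng.random((k, r)), rng.random((r, k))
    M = A @ B
    M /= M.sum()                                # true rank-r distribution matrix
    X = rng.multinomial(n_samples, M.ravel()).reshape(k, k)
    M_hat = truncated_svd_estimate(X, rank=r)
    print("L1 distance to truth:", np.abs(M_hat - M).sum())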