Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
Authors: Chi Jin, Sham M. Kakade, Praneeth Netrapalli
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we propose the first provable, efficient online algorithm for matrix completion. Our proofs introduce a general framework to show that SGD updates tend to stay away from saddle surfaces and could be of broader interest to other non-convex problems. |
| Researcher Affiliation | Collaboration | Chi Jin (UC Berkeley); Sham M. Kakade (University of Washington); Praneeth Netrapalli (Microsoft Research, India) |
| Pseudocode | Yes | Algorithm 1: Online Algorithm for PSD Matrix Completion; Algorithm 2: Online Algorithm for Matrix Completion (Theoretical); Algorithm 3: Online Algorithm for Matrix Completion (Practical) |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper does not specify a publicly available or open dataset used for training. It refers to 'Initial set of uniformly random samples Ωinit' without providing access information. |
| Dataset Splits | No | The paper is theoretical and does not describe specific training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe the specific hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper discusses theoretical parameters like learning rate (η) and number of observations (T) but does not provide concrete hyperparameter values or system-level training settings for an empirical experimental setup. |
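For readers unfamiliar with the paper's setting, the sketch below illustrates the kind of online SGD update the algorithms in the table perform: a rank-`k` factor `U` of a PSD matrix `M ≈ U Uᵀ` is updated one observed entry at a time. This is a hedged illustration, not the paper's exact Algorithm 1; the function name, step size `eta`, and rank `k` are illustrative choices (the paper treats the learning rate η and observation count T only theoretically and gives no concrete values).

```python
import numpy as np

def online_psd_completion(entries, d, k, eta=0.05, seed=0):
    """Illustrative online SGD for PSD matrix completion (not the paper's
    exact algorithm). `entries` is a stream of (i, j, value) observations
    of a d x d PSD matrix M; returns a d x k factor U with M ~ U @ U.T."""
    rng = np.random.default_rng(seed)
    # Small random initialization; the paper instead analyzes an
    # initialization built from an initial sample set Omega_init.
    U = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
    for i, j, m_ij in entries:
        r = U[i] @ U[j] - m_ij            # residual on the observed entry
        gi, gj = r * U[j].copy(), r * U[i].copy()  # gradients w.r.t. rows i, j
        U[i] -= eta * gi                  # SGD step on row i
        if i != j:
            U[j] -= eta * gj              # and on row j for off-diagonal entries
    return U
```

Each observation touches only two rows of `U`, which is what makes the online/streaming formulation cheap per step; the paper's contribution is proving that such updates avoid saddle points and recover `M` efficiently.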