Rank-1 Matrix Completion with Gradient Descent and Small Random Initialization

Authors: Daesung Kim, Hye Won Chung

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this paper, we study the rank-1 symmetric matrix completion and prove that GD converges to the ground truth when small random initialization is used. We show that in a logarithmic number of iterations, the trajectory enters the region where local convergence occurs. We provide an upper bound on the initialization size that is sufficient to guarantee the convergence, and show that a larger initialization can be used as more samples are available. We observe that the implicit regularization effect of GD plays a critical role in the analysis, and for the entire trajectory, it prevents each entry from becoming much larger than the others." Also, from Section 7 (Simulation): "In this section, we present some simulation results that support our theoretical findings."
Researcher Affiliation | Collaboration | Daesung Kim, Samsung Electronics (dskim95phd@gmail.com); Hye Won Chung, School of Electrical Engineering, KAIST (hwchung@kaist.ac.kr)
Pseudocode | No | The paper describes the Gradient Descent update rule in equation form (e.g., Equation 1) but does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | No | No explicit statement or link indicating that the source code for the described methodology is publicly available was found.
Open Datasets | No | The paper describes generating synthetic data for simulations rather than using a publicly available or open dataset: "With the dimension n = 5000, we constructed the ground truth vector u by sampling it from the Gaussian distribution N(0, (1/n)I) and normalizing it to have unit norm. We let λ = 1 so that the matrix M is given by uu^T, and we randomly sampled the matrix symmetrically with a sampling rate of p = 0.1 and Gaussian noise of σ = 0.1/n."
Dataset Splits | No | The paper describes simulation experiments but does not provide specific details on train/validation/test dataset splits or cross-validation methodology.
Hardware Specification | No | The paper describes simulation parameters and trials, but does not provide any specific details about the hardware (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | "With the dimension n = 5000, we constructed the ground truth vector u by sampling it from the Gaussian distribution N(0, (1/n)I) and normalizing it to have unit norm. We let λ = 1 so that the matrix M is given by uu^T, and we randomly sampled the matrix symmetrically with a sampling rate of p = 0.1 and Gaussian noise of σ = 0.1/n. The initialization size was set to β0 = 1/n and a step size of 0.1 was used for GD."
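The quoted experiment setup, together with the GD update the paper gives only in equation form, can be sketched as follows. This is a hedged reconstruction, not the authors' code: the masked-gradient update x ← x − (η/p) P_Ω(xxᵀ − M_obs) x, the symmetric construction of the sampling mask and noise, and the readings σ = 0.1/n and β0 = 1/n for the extraction-garbled quantities are all assumptions.

```python
import numpy as np

def simulate(n=5000, p=0.1, lam=1.0, eta=0.1, iters=500, seed=0):
    """Sketch of the paper's rank-1 symmetric matrix completion simulation.

    Assumed readings of the setup: u ~ N(0, (1/n) I) normalized to unit
    norm, M = lam * u u^T, symmetric Bernoulli(p) sampling, symmetric
    Gaussian noise with sigma = 0.1/n, initialization size beta0 = 1/n.
    """
    rng = np.random.default_rng(seed)
    sigma, beta0 = 0.1 / n, 1.0 / n

    # Ground truth: u ~ N(0, (1/n) I), normalized; M = lam * u u^T.
    u = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
    u /= np.linalg.norm(u)
    M = lam * np.outer(u, u)

    # Symmetric Bernoulli(p) observation mask and symmetric Gaussian noise.
    B = rng.random((n, n)) < p
    mask = np.triu(B) | np.triu(B, 1).T
    N = rng.normal(0.0, sigma, size=(n, n))
    noise = np.triu(N) + np.triu(N, 1).T
    M_obs = mask * (M + noise)

    # Small random initialization of norm beta0, then plain GD; the update
    # assumed here minimizes (1/(4p)) * ||P_Omega(x x^T - M_obs)||_F^2.
    x = rng.normal(size=n)
    x *= beta0 / np.linalg.norm(x)
    for _ in range(iters):
        x -= (eta / p) * (mask * (np.outer(x, x) - M_obs)) @ x
    return u, x

# Reduced-size run for illustration (the paper uses n = 5000).
u, x = simulate(n=1000)
```

With a small initialization the iterate first grows along the signal direction and then converges locally, so after a few hundred iterations x should align (up to sign) with the ground truth u.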