Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Implicit Regularization in Matrix Sensing via Mirror Descent

Authors: Fan Wu, Patrick Rebeschini

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical simulations (Section 6): In this section, we present numerical simulations examining the dependence of the final estimates of mirror descent equipped with the spectral entropy (5) and of gradient descent with full-rank parametrization X = UU^T, U ∈ R^{n×n}, on the initialization size for random Gaussian sensing matrices A_1, ..., A_m. We evaluate the nuclear norm ‖X‖_*, the reconstruction error ‖X − X*‖_F, and the effective rank [28] effrank(X) = exp(−∑_{i=1}^n p_i log p_i), where p_i = σ_i/‖X‖_*, i = 1, ..., n, denote the normalized singular values of X. Numerical simulations for matrix completion are provided in Appendix C and yield similar results as for random Gaussian sensing matrices.
Researcher Affiliation | Academia | Fan Wu, Department of Statistics, University of Oxford, EMAIL; Patrick Rebeschini, Department of Statistics, University of Oxford, EMAIL
Pseudocode | No | The paper describes algorithmic updates using mathematical equations but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplemental material or https://github.com/fawuuu/irmsmd.
Open Datasets | No | Our experimental setup is as follows. We generate a rank-r positive semidefinite matrix by sampling a random matrix U* ∈ R^{n×r} with i.i.d. N(0, 1) entries, setting X* = U*(U*)^T and normalizing ‖X*‖ = 1. We generate m symmetric sensing matrices A_i = (1/2)(B_i + B_i^T), where the entries of B_i are i.i.d. N(0, 1).
Dataset Splits | No | The paper describes data generation for the experiments but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology).
Hardware Specification | Yes | The experiments for Figure 1 were implemented in Python 3.9 and took around 10 minutes on a machine with a 1.1-GHz Intel Core i5 CPU and 8 GB of RAM.
Software Dependencies | No | The experiments for Figure 1 were implemented in Python 3.9... using the cvxopt package.
Experiment Setup | Yes | We run mirror descent and gradient descent with initialization X_0 = αI and constant step sizes µ = 1 and µ = 0.25, respectively, for T = 5000 iterations, and vary the initialization size α from 10^-1 to 10^-10.
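The effective rank quoted under Research Type (definition from Roy and Vetterli [28]) is easy to compute from a singular value decomposition. A minimal NumPy sketch (the function name is ours, not from the paper's code):

```python
import numpy as np

def effective_rank(X):
    """effrank(X) = exp(-sum_i p_i log p_i), where p_i = sigma_i / ||X||_*
    are the singular values normalized by the nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()          # p_i = sigma_i / ||X||_* (nuclear norm = sum of sigma_i)
    p = p[p > 0]             # drop zero singular values so log is well defined
    return float(np.exp(-np.sum(p * np.log(p))))
```

For the identity matrix all p_i are equal, so the effective rank equals the dimension; for a rank-1 matrix it equals 1, interpolating smoothly in between.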
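The data-generating process quoted under Open Datasets can be sketched as follows. The norm used to normalize X* is not legible in the extracted text, so Frobenius normalization is assumed here, and the function name is our own:

```python
import numpy as np

def generate_problem(n, r, m, rng=None):
    """Rank-r PSD ground truth X* = U*(U*)^T with i.i.d. N(0,1) factor entries,
    normalized to unit norm, plus m symmetric Gaussian sensing matrices
    A_i = (B_i + B_i^T)/2 and measurements y_i = <A_i, X*>."""
    rng = np.random.default_rng(rng)
    U = rng.standard_normal((n, r))
    X_star = U @ U.T
    X_star /= np.linalg.norm(X_star)           # Frobenius normalization (assumed)
    B = rng.standard_normal((m, n, n))
    A = (B + B.transpose(0, 2, 1)) / 2         # symmetrize each B_i
    y = np.einsum("kij,ij->k", A, X_star)      # y_i = trace(A_i X*)
    return X_star, A, y
```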
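The gradient-descent side of the Experiment Setup row can likewise be sketched. The loop below runs full-rank factorized gradient descent from X_0 = αI (i.e. U_0 = √α·I) on a squared loss; the paper's exact loss scaling is not quoted, so the 1/(4m) factor (and hence how the step size µ interacts with it) is an assumption, and the mirror descent variant with the spectral entropy potential is not reproduced here:

```python
import numpy as np

def gradient_descent(A, y, alpha, mu=0.25, T=5000):
    """Gradient descent on f(U) = (1/(4m)) * sum_i (<A_i, UU^T> - y_i)^2
    with full-rank parametrization X = UU^T and initialization X_0 = alpha*I.
    For symmetric A_i, grad f(U) = (1/m) * sum_i (<A_i, UU^T> - y_i) A_i U."""
    A = np.asarray(A)
    m, n, _ = A.shape
    U = np.sqrt(alpha) * np.eye(n)
    for _ in range(T):
        resid = np.einsum("kij,ij->k", A, U @ U.T) - y    # <A_i, UU^T> - y_i
        grad = np.einsum("k,kij->ij", resid, A) @ U / m   # df/dU
        U -= mu * grad
    return U @ U.T
```

With small α and enough Gaussian measurements, the iterate typically converges to a low-rank matrix close to X*, which is the implicit-regularization effect the quoted experiments probe.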