Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Decentralized sketching of low rank matrices

Authors: Rakshith Sharma Srinivasa, Kiryung Lee, Marius Junge, Justin Romberg

NeurIPS 2019 | Venue PDF | LLM Run Details

Reproducibility Variables

Research Type: Experimental
LLM response: "Finally, we provide a scalable ADMM algorithm for the mixed-norm-based method and demonstrate its empirical performance via large-scale simulations. To complement our theoretical results, we observe the empirical performance of the mixed-norm-based method in a set of Monte Carlo simulations."
Researcher Affiliation: Academia
LLM response: "Rakshith S Srinivasa, Dept. of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30318, EMAIL; Kiryung Lee, Dept. of Electrical and Computer Engineering, Ohio State University, Columbus, OH 43210, EMAIL; Marius Junge, Dept. of Mathematics, University of Illinois Urbana-Champaign, Urbana, IL 61801, EMAIL; Justin Romberg, Dept. of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30318, EMAIL"
Pseudocode: Yes
LLM response: "Algorithm 1 ADMM algorithm" (a hedged code sketch of an ADMM iteration of this kind appears after this table)
Open Source Code: No
LLM response: "No explicit statement or link providing access to the open-source code for the described methodology was found."
Open Datasets: No
LLM response: "Matrices are set to be of size 1,000 × 1,000 and of rank 5. In our experiments we normalize the columns to have the same energy. We observe the estimation error by varying the degree of compression and the signal-to-noise (SNR) ratio. We compare the proposed method to the popular matrix LASSO, which minimizes the least squares loss with a nuclear norm regularizer."
Dataset Splits: No
LLM response: "No specific training/validation/test dataset splits were mentioned, as the paper uses simulated data for Monte Carlo simulations rather than pre-existing datasets with defined splits."
Hardware Specification: No
LLM response: "No specific hardware details (e.g., GPU models, CPU types, memory amounts, or cloud instances with specs) used for running experiments were mentioned."
Software Dependencies: No
LLM response: "The paper mentions 'standard convex optimization solvers like SeDuMi [16]' but does not provide specific version numbers for any software dependencies."
Experiment Setup: Yes
LLM response: "Matrices are set to be of size 1,000 × 1,000 and of rank 5. In our experiments we normalize the columns to have the same energy. We observe the estimation error by varying the degree of compression and the signal-to-noise (SNR) ratio. We compare the proposed method to the popular matrix LASSO, which minimizes the least squares loss with a nuclear norm regularizer." (a data-generation sketch of this setup appears below)
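
The Experiment Setup row above describes 1,000 × 1,000 rank-5 matrices with energy-normalized columns, observed column by column at varying compression and SNR. Below is a minimal sketch of that data-generation step; the quoted text does not specify the distributions, so Gaussian low-rank factors, Gaussian sketching matrices, and the particular values of the sketch length `m` and `snr_db` are assumptions for illustration.

```python
# Hedged reconstruction of the quoted simulation setup: a 1,000 x 1,000
# rank-5 matrix with energy-normalized columns, where each column x_i is
# observed through its own m x d1 sketching matrix A_i at a chosen SNR.
# Gaussian factors/sketches and the values of m and snr_db are assumed.
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 1000, 1000, 5        # ambient size and rank, per the quote
m = 50                           # per-column sketch length (assumed)
snr_db = 20.0                    # target SNR in dB (assumed)

# Rank-5 matrix with columns normalized to have the same energy.
X = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
X /= np.linalg.norm(X, axis=0, keepdims=True)

# Columnwise sketches y_i = A_i x_i + noise, scaled to the target SNR.
As, ys = [], []
for i in range(d2):
    A = rng.standard_normal((m, d1)) / np.sqrt(m)
    clean = A @ X[:, i]
    noise = rng.standard_normal(m)
    noise *= np.linalg.norm(clean) / (10 ** (snr_db / 20) * np.linalg.norm(noise))
    As.append(A)
    ys.append(clean + noise)
```

Shrinking `d1` and `d2` gives a quick small-scale test of the same pipeline.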
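The Pseudocode row refers to the paper's Algorithm 1, an ADMM method for the authors' mixed-norm objective. As a hedged illustration of the same ADMM template, the sketch below instead solves the matrix-LASSO baseline quoted above (least squares loss with a nuclear norm regularizer); it is not the authors' Algorithm 1, and the function names and default parameters are hypothetical.

```python
# Hedged sketch of ADMM for the matrix-LASSO baseline quoted above:
#   min_X  0.5 * sum_i ||A_i x_i - y_i||^2  +  lam * ||X||_*
# with the splitting X = Z: a column-separable least-squares x_i-update,
# a singular-value-thresholding Z-update, and a scaled dual update.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def matrix_lasso_admm(As, ys, lam, rho=1.0, n_iter=100):
    """Recover a d1 x d2 matrix from columnwise sketches y_i = A_i x_i."""
    d1, d2 = As[0].shape[1], len(As)
    X, Z, U = (np.zeros((d1, d2)) for _ in range(3))
    # Per-column normal-equation matrices A_i^T A_i + rho I; a truly
    # large-scale version would cache factorizations or solve iteratively.
    grams = [A.T @ A + rho * np.eye(d1) for A in As]
    for _ in range(n_iter):
        for i, A in enumerate(As):
            rhs = A.T @ ys[i] + rho * (Z[:, i] - U[:, i])
            X[:, i] = np.linalg.solve(grams[i], rhs)   # x_i-update
        Z = svt(X + U, lam / rho)                      # nuclear-norm prox
        U += X - Z                                     # scaled dual ascent
    return Z
```

With `As` and `ys` from the previous sketch, `X_hat = matrix_lasso_admm(As, ys, lam=0.1)` returns a low-rank estimate; `lam` and `rho` would need tuning per compression level and SNR.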