Decentralized sketching of low rank matrices

Authors: Rakshith Sharma Srinivasa, Kiryung Lee, Marius Junge, Justin Romberg

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we provide a scalable ADMM algorithm for the mixed-norm-based method and demonstrate its empirical performance via large-scale simulations. To complement our theoretical results, we observe the empirical performance of the mixed-norm-based method in a set of Monte Carlo simulations."
Researcher Affiliation | Academia | Rakshith S. Srinivasa, Dept. of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30318 (rsrinivasa6@gatech.edu); Kiryung Lee, Dept. of Electrical and Computer Engineering, Ohio State University, Columbus, OH 43210 (lee.8763@osu.edu); Marius Junge, Dept. of Mathematics, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (mjunge@illinois.edu); Justin Romberg, Dept. of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30318 (jrom@ece.gatech.edu)
Pseudocode | Yes | Algorithm 1: ADMM algorithm
Open Source Code | No | No explicit statement or link providing access to open-source code for the described methodology was found.
Open Datasets | No | The paper uses only synthetic data: "Matrices are set to be of size 1,000 × 1,000 and of rank 5. In our experiments we normalize the columns to have the same energy. We observe the estimation error by varying the degree of compression and the signal-to-noise (SNR) ratio. We compare the proposed method to the popular matrix LASSO, which minimizes the least squares loss"
Dataset Splits | No | No specific training/validation/test dataset splits were mentioned, as the paper uses simulated data for Monte Carlo simulations rather than pre-existing datasets with defined splits.
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory amounts, or cloud instances with specs) used for running experiments were mentioned.
Software Dependencies | No | The paper mentions "standard convex optimization solvers like SeDuMi [16]" but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "Matrices are set to be of size 1,000 × 1,000 and of rank 5. In our experiments we normalize the columns to have the same energy. We observe the estimation error by varying the degree of compression and the signal-to-noise (SNR) ratio. We compare the proposed method to the popular matrix LASSO, which minimizes the least squares loss with a nuclear norm regularizer."
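The experiment setup above (a 1,000 × 1,000 rank-5 matrix with energy-normalized columns, each column observed through its own sketch) can be reproduced as data generation in a few lines of NumPy. This is a minimal sketch, not the authors' code: the per-column sketch dimension `m = 50` and the use of i.i.d. Gaussian sketching matrices are illustrative assumptions, and no recovery algorithm (ADMM or matrix LASSO) is implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the paper's experiment setup: 1,000 x 1,000, rank 5.
# The per-column sketch size m is an assumption for illustration.
d1, d2, r, m = 1000, 1000, 5, 50

# Rank-r ground-truth matrix X = U V^T
U = rng.standard_normal((d1, r))
V = rng.standard_normal((d2, r))
X = U @ V.T

# Normalize each column to unit l2 norm ("same energy", as in the paper)
X /= np.linalg.norm(X, axis=0, keepdims=True)

# Decentralized sketching: column x_i is observed only through its own
# (here Gaussian) sketching matrix A_i, giving y_i = A_i x_i (noiseless).
Y = np.stack([rng.standard_normal((m, d1)) @ X[:, i] for i in range(d2)])

print(np.linalg.matrix_rank(X))  # 5
print(Y.shape)                   # (1000, 50): m measurements per column
```

With these sizes the sketches hold m * d2 = 50,000 numbers versus 1,000,000 matrix entries, i.e. a 20x compression; sweeping `m` is one way to vary the "degree of compression" mentioned in the setup.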