Distributed Low-rank Matrix Factorization With Exact Consensus

Authors: Zhihui Zhu, Qiuwei Li, Xinshuo Yang, Gongguo Tang, Michael B. Wakin

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate our conclusion for distributed low-rank matrix approximation, the left panel in Figure 1 shows the convergence of DGD+LOCAL for a low-rank matrix factorization problem whose setup is described in the supplementary material. Both the blue line (showing the objective value) and the red line (showing the consensus error) converge to zero. In contrast, the right panel in Figure 1 shows that DGD fails to achieve such optimality and consensus on a different (least squares) problem. We also include experiments on distributed matrix completion and matrix sensing in the supplementary material. (An illustrative sketch of a DGD+LOCAL-style iteration appears after this table.)
Researcher Affiliation | Academia | Zhihui Zhu (Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD, USA; zzhu29@jhu.edu); Qiuwei Li (Department of Electrical Engineering, Colorado School of Mines, Golden, CO, USA; qiuli@mines.edu); Xinshuo Yang (Department of Electrical Engineering, Colorado School of Mines, Golden, CO, USA; xinshuoyang@mines.edu); Gongguo Tang (Department of Electrical Engineering, Colorado School of Mines, Golden, CO, USA; gtang@mines.edu); Michael B. Wakin (Department of Electrical Engineering, Colorado School of Mines, Golden, CO, USA; mwakin@mines.edu)
Pseudocode | No | The paper describes algorithms using mathematical equations (e.g., equations (5) and (17)) but does not include structured pseudocode or an 'Algorithm' block.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described.
Open Datasets | No | The paper refers to a 'data matrix Y' and discusses the distributed low-rank matrix approximation problem, but it does not name or provide access information for any specific publicly available dataset used in the experiments.
Dataset Splits | No | The paper does not provide specific details on how datasets were split into training, validation, or test sets, nor does it refer to predefined splits with citations.
Hardware Specification | No | The paper does not mention any specific hardware (e.g., CPU, GPU models, cloud computing resources with specifications) used for running the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | No | The paper states that 'Full details are provided in the supplementary material' for the experimental setup, but the main text itself does not provide concrete hyperparameter values or detailed training configurations.
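To make the DGD+LOCAL behavior cited in the Research Type row concrete, below is a minimal sketch, not the authors' implementation. It assumes the data matrix Y is partitioned by columns across J agents, that consensus is enforced on a shared left factor U while each agent keeps a private right factor V_j, and that the agents communicate through a doubly stochastic mixing matrix W on a ring graph; the step size, problem sizes, and update order are illustrative assumptions rather than values taken from the paper or its supplementary material.

```python
# Illustrative DGD+LOCAL-style sketch for distributed low-rank matrix factorization.
# All concrete choices (sizes, ring topology, step size, iteration count) are assumptions.
import numpy as np

np.random.seed(0)
n, m, r, J = 60, 40, 3, 4                            # rows, columns, rank, number of agents
Y = np.random.randn(n, r) @ np.random.randn(r, m)    # exactly rank-r data matrix
Y_parts = np.array_split(Y, J, axis=1)               # agent j holds the column block Y_j

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = np.zeros((J, J))
for j in range(J):
    W[j, j] = 0.5
    W[j, (j - 1) % J] = 0.25
    W[j, (j + 1) % J] = 0.25

U = [np.random.randn(n, r) for _ in range(J)]                    # consensus (shared) factors
V = [np.random.randn(Y_parts[j].shape[1], r) for j in range(J)]  # local (private) factors
mu = 1e-3                                                        # illustrative step size

for _ in range(2000):
    # Local gradients of f_j(U_j, V_j) = 0.5 * ||Y_j - U_j V_j^T||_F^2.
    res = [U[j] @ V[j].T - Y_parts[j] for j in range(J)]
    grad_U = [res[j] @ V[j] for j in range(J)]
    grad_V = [res[j].T @ U[j] for j in range(J)]
    # DGD-style update on the shared factor: neighbor averaging plus a gradient step.
    U = [sum(W[j, i] * U[i] for i in range(J)) - mu * grad_U[j] for j in range(J)]
    # "LOCAL" update on the private factor: plain gradient descent, no mixing.
    V = [V[j] - mu * grad_V[j] for j in range(J)]

objective = sum(0.5 * np.linalg.norm(U[j] @ V[j].T - Y_parts[j], 'fro') ** 2 for j in range(J))
U_bar = sum(U) / J
consensus = sum(np.linalg.norm(U[j] - U_bar, 'fro') ** 2 for j in range(J))
print(f"objective: {objective:.3e}   consensus error: {consensus:.3e}")
```

The two printed quantities correspond to the objective value and consensus error tracked in the paper's Figure 1 discussion; in the exact-consensus regime both are expected to approach zero, though this toy script makes no claim about reproducing the paper's experimental setup.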