Acceleration in Distributed Sparse Regression

Authors: Marie Maros, Gesualdo Scutari

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the efficiency of our proposed algorithms through extensive numerical experiments on both synthetic and real-world datasets, demonstrating significant speedups compared to state-of-the-art distributed sparse regression methods.
Researcher Affiliation | Academia | Marie Maros (Purdue University), Gesualdo Scutari (Purdue University)
Pseudocode | Yes | Algorithm 1: ADMM-PD for Sparse Regression (page 4); Algorithm 2: ADMM-PD-DS for Decentralized Sparse Regression (page 6). (An illustrative ADMM sketch appears after the table.)
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | Real-world datasets: We use two publicly available real-world datasets, News20 and RCV1. These datasets are commonly used benchmarks for large-scale sparse learning problems.
Dataset Splits | Yes | For the News20 dataset, we use the standard 15997 training samples and 3999 test samples. For the RCV1 dataset, we use the official split of 20242 training samples and 677399 test samples. (A loading sketch appears after the table.)
Hardware Specification | Yes | All experiments are conducted on a cluster of 10 machines, each equipped with an Intel Xeon E5-2630 v4 CPU (2.2 GHz) and 64 GB RAM, connected via a 1 Gbps Ethernet network.
Software Dependencies | No | The paper states: 'Our algorithms are implemented in Python using PyTorch for tensor operations and distributed communication.' However, it does not provide specific version numbers for Python, PyTorch, or any other software dependency, which are necessary for reproducibility. (A version-logging sketch appears after the table.)
Experiment Setup | Yes | For both synthetic and real-world experiments, we run each algorithm for 500 iterations. The learning rate α is tuned over {0.1, 0.01, 0.001} for each algorithm and fixed at the best-performing value. The regularization parameter λ is set to 10^-3 for synthetic data and tuned over {10^-2, 10^-3, 10^-4} for the real-world datasets. (A tuning sketch appears after the table.)
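
On the pseudocode entry: the ADMM-PD updates themselves are not reproduced in this summary. As a rough illustration of the family of methods named there, the following is a minimal consensus-ADMM sketch for the distributed LASSO, not the paper's algorithm; the names (soft_threshold, rho, A_blocks) and the data splitting are assumptions.

    import numpy as np

    def soft_threshold(v, kappa):
        # Elementwise shrinkage: the proximal operator of kappa * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def consensus_admm_lasso(A_blocks, b_blocks, lam, rho=1.0, iters=500):
        # Distributed LASSO via global-consensus ADMM: worker i holds a
        # data shard (A_i, b_i); z is the shared sparse estimate.
        N = len(A_blocks)
        d = A_blocks[0].shape[1]
        x = np.zeros((N, d))
        u = np.zeros((N, d))   # scaled dual variables
        z = np.zeros(d)
        # Factor each worker's ridge system once; reused every iteration.
        chols = [np.linalg.cholesky(A.T @ A + rho * np.eye(d)) for A in A_blocks]
        Atb = [A.T @ b for A, b in zip(A_blocks, b_blocks)]
        for _ in range(iters):
            for i in range(N):  # local x-updates (parallel across workers)
                rhs = Atb[i] + rho * (z - u[i])
                y = np.linalg.solve(chols[i], rhs)      # forward solve  L y = rhs
                x[i] = np.linalg.solve(chols[i].T, y)   # backward solve L^T x = y
            # Global z-update: average the local estimates, then soft-threshold.
            z = soft_threshold((x + u).mean(axis=0), lam / (rho * N))
            u += x - z  # dual ascent on the consensus constraints x_i = z
        return z

Under standard conditions z converges to the LASSO minimizer; only the x-updates touch local data, which is what makes the scheme distributable.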
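
On the dataset entries: the quoted splits match the LIBSVM binary-classification versions of News20 (19,996 samples in total) and RCV1 (which ships pre-split). A minimal loading sketch, with local file names as assumptions:

    from sklearn.datasets import load_svmlight_file, load_svmlight_files
    from sklearn.model_selection import train_test_split

    # News20 is not distributed pre-split; a fixed random split reproduces
    # the 15,997 / 3,999 partition reported above.
    X, y = load_svmlight_file("news20.binary")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=3999, random_state=0)

    # RCV1 has an official train/test split; loading both files together
    # keeps the two matrices in the same feature space.
    X_rcv_tr, y_rcv_tr, X_rcv_te, y_rcv_te = load_svmlight_files(
        ["rcv1_train.binary", "rcv1_test.binary"])  # 20,242 / 677,399 samples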
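
On the missing dependency versions: a standard mitigation is to log the exact environment next to any saved results, e.g.:

    import sys
    import numpy as np
    import torch

    # Record interpreter and library versions alongside experiment outputs.
    print("python:", sys.version.split()[0])
    print("numpy :", np.__version__)
    print("torch :", torch.__version__)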
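
On the experiment setup: the described protocol is a small grid search over the learning rate and, on real data, the regularization weight. A minimal sketch of that loop follows; run_algorithm is a hypothetical callable standing in for any of the compared methods and returning its final objective value.

    import itertools

    ALPHAS = [0.1, 0.01, 0.001]    # learning-rate grid from the setup
    LAMBDAS = [1e-2, 1e-3, 1e-4]   # regularization grid for the real datasets
    ITERS = 500                    # iteration budget from the setup

    def tune(run_algorithm, X, y):
        # Return the best (objective, alpha, lam) found over the grid.
        best = None
        for alpha, lam in itertools.product(ALPHAS, LAMBDAS):
            obj = run_algorithm(X, y, alpha=alpha, lam=lam, iters=ITERS)
            if best is None or obj < best[0]:
                best = (obj, alpha, lam)
        return best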