Doubly Sparse Asynchronous Learning for Stochastic Composite Optimization
Authors: Runxue Bao, Xidong Wu, Wenhan Xian, Heng Huang
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experimental results on benchmark datasets confirm the superiority of our proposed method. |
| Researcher Affiliation | Academia | Electrical and Computer Engineering Department, University of Pittsburgh, PA, USA {runxue.bao, xidong.wu, wex37, heng.huang}@pitt.edu |
| Pseudocode | Yes | Algorithm 1 Sha-DSAL-Naive; Algorithm 2 Sha-DSAL; Algorithm 3 Dis-DSAL (Server Node); Algorithm 4 Dis-DSAL (Worker Node k) |
| Open Source Code | No | The paper states 'We implement all the methods in C++.' but does not provide any link or explicit statement about the availability of the source code. |
| Open Datasets | Yes | We use three real-world datasets in Table 2, which are from LIBSVM [Chang and Lin, 2011] at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. |
| Dataset Splits | No | The paper mentions using three real-world datasets but does not specify how these datasets were split into training, validation, or test sets (e.g., percentages or sample counts for each split). |
| Hardware Specification | Yes | We run all the methods on 2.10 GHz Intel(R) Xeon(R) CPU machines. |
| Software Dependencies | No | We implement all the methods in C++. We employ OpenMP and Open MPI as the parallel framework for shared-memory and distributed-memory architecture respectively. The paper names software components but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | The inner loop size and the step size are chosen to obtain the best performance. Parameter λ is selected as 4×10⁻⁶λmax, 2×10⁻³λmax, and 1×10⁻³λmax for the KDD 2010, Avazu-app, and Avazu-site datasets respectively, where λmax is the parameter such that, for all λ ≥ λmax, the solution x must be 0 (a standard closed form for λmax is sketched after this table). |
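
The Experiment Setup row defines λmax only implicitly. For an ℓ1-regularized composite objective of the form min_x f(x) + λ‖x‖₁, which the paper's sparse composite setting suggests but the quoted excerpt does not state explicitly, the smallest λ that forces the zero solution has a standard closed form. The derivation below is a sketch under that assumption, not a statement of the paper's exact objective.

```latex
% Assumed composite objective:  min_x  f(x) + \lambda \|x\|_1
% x = 0 is optimal  iff  0 \in \nabla f(0) + \lambda \,\partial\|x\|_1 \big|_{x=0}
%                   iff  \|\nabla f(0)\|_\infty \le \lambda .
% Hence the threshold quoted as \lambda_{max}:
\lambda_{\max} \;=\; \|\nabla f(0)\|_{\infty},
\qquad
x^\star = 0 \quad \text{for all } \lambda \ge \lambda_{\max}.
```

Under this reading, the reported settings pick λ as small fractions (between 10⁻⁶ and 10⁻³) of the threshold at which the regularizer drives the solution to zero.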