Learning With Subquadratic Regularization: A Primal-Dual Approach

Authors: Raman Sankaran, Francis Bach, Chiranjib Bhattacharyya

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (6 experiments) | To illustrate the efficiency of CP-η and ADMM-η over existing algorithms, we choose the aforementioned tree-sparsity-inducing norm ΩH (Example 1)... Setup. We perform numerical simulations by generating synthetic data... We make the following inferences from the simulation plots given in Figure 1.
Researcher Affiliation | Collaboration | Raman Sankaran¹,³, Francis Bach², and Chiranjib Bhattacharyya³ (¹LinkedIn, Bengaluru; ²INRIA, École Normale Supérieure, PSL Research University, Paris; ³Indian Institute of Science, Bengaluru)
Pseudocode | Yes | Algorithm 1 CP [Chambolle and Pock, 2011]... Algorithm 2 ADMM-η... Algorithm 3 CP-η (a generic CP iteration is sketched after the table)
Open Source Code | No | The paper does not provide any specific links to open-source code for the methodology described, nor does it explicitly state that the code is publicly available.
Open Datasets | No | We perform numerical simulations by generating synthetic data. Following [Bach et al., 2011], we generate X ∈ ℝ^{n×d} with X_ij ∼ N(0, 1).
Dataset Splits | No | The paper performs numerical simulations by generating synthetic data and sets parameters such as n, d, and λ, but it does not specify train, validation, or test dataset splits or cross-validation settings.
Hardware Specification | Yes | Conducted on an Ubuntu PC with a Core i7 processor and 8 GB RAM.
Software Dependencies | No | The paper mentions running experiments on an 'Ubuntu PC' but does not specify any software dependencies (e.g., libraries, frameworks, or programming languages) with their version numbers.
Experiment Setup | Yes | We fixed n = 1000, d = 15000, λ = 0.01, and the convergence criterion was the relative duality gap (with threshold ϵ = 10⁻⁴). (The data generation and this setup are sketched after the table.)
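
As a companion to the Pseudocode row, below is a minimal sketch of the generic primal-dual iteration from [Chambolle and Pock, 2011], which the paper cites as Algorithm 1 (CP). It is not the paper's CP-η or ADMM-η variants. For concreteness it uses a squared loss with an ℓ1 penalty as a stand-in for the tree-sparsity norm ΩH; the prox operators, step sizes, and iteration count are illustrative assumptions.

```python
# Minimal sketch of the generic Chambolle-Pock (CP) primal-dual iteration
# [Chambolle and Pock, 2011] for problems of the form min_w G(w) + F(Xw).
# NOT the paper's CP-eta or ADMM-eta; F is a squared loss and G an l1
# penalty here, standing in for the tree-sparsity norm Omega_H.
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def chambolle_pock(X, y, lam, n_iter=500):
    n, d = X.shape
    L = np.linalg.norm(X, 2)        # operator norm of X
    tau = sigma = 1.0 / L           # step sizes satisfying tau*sigma*L^2 <= 1
    theta = 1.0
    w = np.zeros(d)                 # primal variable
    u = np.zeros(n)                 # dual variable
    w_bar = w.copy()
    for _ in range(n_iter):
        # dual step: prox of sigma * F*, where F(z) = 0.5 * ||z - y||^2
        u = (u + sigma * (X @ w_bar) - sigma * y) / (1.0 + sigma)
        # primal step: prox of tau * G, where G(w) = lam * ||w||_1
        w_new = soft_threshold(w - tau * (X.T @ u), tau * lam)
        # extrapolation step
        w_bar = w_new + theta * (w_new - w)
        w = w_new
    return w
```

The step-size choice τσ‖X‖² ≤ 1 is the standard convergence condition for this iteration; the paper's accelerated variants modify the updates, which are not reproduced here.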
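
The Open Datasets and Experiment Setup rows quote the paper's synthetic setup; the sketch below reproduces only the stated choices (X_ij ∼ N(0, 1), n = 1000, d = 15000, λ = 0.01, relative duality gap threshold ϵ = 10⁻⁴). The sparse ground truth, noise level, and the exact form of the duality-gap test are assumptions, since the excerpt does not specify them.

```python
# Minimal sketch of the synthetic-data setup quoted in the table above.
# The stated facts: X_ij ~ N(0, 1), n = 1000, d = 15000, lambda = 0.01,
# stop at relative duality gap <= 1e-4. The sparse ground truth, noise
# level, and the gap formula below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 1000, 15000, 0.01
eps = 1e-4                                   # relative duality-gap threshold

X = rng.standard_normal((n, d))              # X_ij ~ N(0, 1), as stated
w_true = np.zeros(d)                         # assumed sparse ground truth
support = rng.choice(d, size=50, replace=False)
w_true[support] = rng.standard_normal(support.size)
y = X @ w_true + 0.1 * rng.standard_normal(n)   # assumed noise level

def converged(primal_obj, dual_obj, eps=eps):
    """Stop when the relative duality gap falls below eps (assumed form)."""
    return (primal_obj - dual_obj) / max(abs(primal_obj), 1.0) <= eps
```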