Orthant-Wise Passive Descent Algorithms for Training L1-Regularized Models
Authors: Jianqiao Wangni
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical Experiments First, we implement OPDA in MATLAB, based on the code generously provided by the authors of (Gower, Goldfarb, and Richtárik 2016). We verify the algorithm's efficiency by logistic regression with L2 and L1 regularizations for a binary classification task. ... We use datasets from (Chang and Lin 2011), including covtype (N = 581K, D = 54) and rcv1 (N = 20K, D = 47K). |
| Researcher Affiliation | Industry | Jianqiao Wangni Tencent AI Lab Shenzhen, China zjnqha@gmail.com |
| Pseudocode | Yes | Algorithm 1 Orthant-Wise Passive Descent Algorithms |
| Open Source Code | No | The paper states, 'First, we implement OPDA in MATLAB, based on the code generously provided by the authors of (Gower, Goldfarb, and Richtárik 2016),' but does not provide an explicit statement or link for their own OPDA source code. |
| Open Datasets | Yes | We use datasets from (Chang and Lin 2011), including covtype (N = 581K, D = 54) and rcv1 (N = 20K, D = 47K). |
| Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., exact percentages, sample counts for training, validation, and test sets, or specific cross-validation setups). |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models, processor types, or memory amounts) used for running the experiments are provided in the paper. |
| Software Dependencies | No | The paper states, 'First, we implement OPDA in MATLAB,' but does not provide specific version numbers for MATLAB or any other software dependencies. |
| Experiment Setup | Yes | The stepsize η is noted after the prefix. The stepsize is grid-searched for an optimum, whose nearby stepsizes are also tested and plotted. We set each outer iteration to consist of m = N/|Sk| times of inner iteration, during which both OPDA and Proximal SVRG fully scan over the dataset before recalculating the full gradient and updating the reference point. |
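The experiment-setup quote describes the standard Proximal SVRG schedule used as the paper's baseline: each outer iteration recomputes the full gradient at a reference point, then runs m = N/|S_k| mini-batch inner steps with a variance-reduced gradient and a proximal (soft-thresholding) step for the L1 term. A minimal sketch of that loop for L1-regularized logistic regression follows; this illustrates the baseline setup only, not the OPDA algorithm itself, and all function and parameter names (`prox_svrg_l1`, `soft_threshold`, `eta`, `lam`, `batch`) are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg_l1(X, y, lam, eta, n_outer=10, batch=64, rng=None):
    """Sketch of Proximal SVRG for L1-regularized logistic regression.
    Each outer iteration runs m = N // batch inner steps (one full pass
    over the data) before recomputing the full gradient and updating
    the reference point, matching the setup quoted above. Labels y
    are assumed to be in {0, 1}."""
    rng = np.random.default_rng(rng)
    N, D = X.shape
    w = np.zeros(D)        # current iterate
    w_ref = w.copy()       # reference point for variance reduction
    m = N // batch         # inner iterations per outer loop
    for _ in range(n_outer):
        # Full logistic-loss gradient at the reference point.
        g_full = X.T @ (1.0 / (1.0 + np.exp(-X @ w_ref)) - y) / N
        for _ in range(m):
            idx = rng.choice(N, size=batch, replace=False)
            Xb, yb = X[idx], y[idx]
            g_w = Xb.T @ (1.0 / (1.0 + np.exp(-Xb @ w)) - yb) / batch
            g_ref = Xb.T @ (1.0 / (1.0 + np.exp(-Xb @ w_ref)) - yb) / batch
            v = g_w - g_ref + g_full                    # variance-reduced gradient
            w = soft_threshold(w - eta * v, eta * lam)  # proximal step for L1
        w_ref = w.copy()   # update reference point after the full scan
    return w
```

The grid search over the stepsize η mentioned in the quote would sit outside this function, calling it once per candidate `eta` value.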