Sparse Convex Optimization via Adaptively Regularized Hard Thresholding

Authors: Kyriakos Axiotis, Maxim Sviridenko

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments with real data suggest that ARHT and a variant of OMPR which we call Exhaustive Local Search achieve promising performance in recovering sparse solutions. In this section we evaluate the training performance of different algorithms in the tasks of Linear Regression and Logistic Regression. The results are presented in Figure 1 and Figure 2.
Researcher Affiliation | Collaboration | ¹MIT, ²Yahoo! Research. Correspondence to: Kyriakos Axiotis <kaxiotis@mit.edu>, Maxim Sviridenko <sviri@verizonmedia.com>.
Pseudocode | Yes | Algorithm 1 ARHT core routine; Algorithm 2 ARHT; Algorithm 3 Orthogonal Matching Pursuit with Replacement. (A hedged sketch of the plain hard-thresholding baseline these methods build on is given below the table.)
Open Source Code | No | The paper does not include an unambiguous statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We run our experiments on publicly available regression and binary classification datasets, out of which we have presented those on which the algorithms have significantly different performance between each other. In both types of objectives (linear and logistic) we include an intercept term, which is present in all solutions (i.e. it is always counted as +1 in the sparsity of the solution). For consistency, all greedy algorithms (OMPR, ARHT, Exhaustive Local Search) are initialized with the OMP solution of the same sparsity.
Dataset Splits | No | The paper does not provide specific details about training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications, or cloud computing instance types) used for running the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, specific library versions).
Experiment Setup | No | The paper mentions that an intercept term is included and that the greedy algorithms are initialized with OMP solutions, but it lacks specific details about hyperparameters (e.g., learning rates, batch sizes, number of epochs, optimizer settings) or comprehensive system-level training configurations. (A hedged sketch of an OMP initializer with an always-included intercept is given below the table.)
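
As a concrete point of reference for the Pseudocode row, the following is a minimal sketch of plain iterative hard thresholding for least squares, the baseline family that ARHT extends with adaptive regularization. It is an illustration under our own assumptions, not the paper's Algorithm 1 or 2: the adaptive ridge term and the paper's support-update rule are not reproduced, and the function name `iht_least_squares` is hypothetical.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    if k > 0:
        idx = np.argpartition(np.abs(x), -k)[-k:]
        out[idx] = x[idx]
    return out

def iht_least_squares(A, b, k, step=None, iters=200):
    """Plain iterative hard thresholding for min (1/2)||Ax - b||^2 s.t. ||x||_0 <= k.
    Baseline sketch only; ARHT additionally uses an adaptively chosen
    regularization term, which is not reproduced here."""
    n, d = A.shape
    if step is None:
        # Conservative step size: 1 / lambda_max(A^T A), via the spectral norm of A.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(d)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the least-squares loss
        x = hard_threshold(x - step * grad, k)
    return x
```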
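
The experiments initialize OMPR, ARHT, and Exhaustive Local Search from an OMP solution of the same sparsity, with an intercept that is always present and counted as +1 toward the sparsity budget. Below is a hedged sketch of such an initializer for the least-squares case; `omp_with_intercept` is a hypothetical helper name and the authors' actual preprocessing may differ. For the logistic objective, the analogous greedy step would select the coordinate with the largest-magnitude gradient of the logistic loss and re-fit on the enlarged support.

```python
import numpy as np

def _refit(A, support, y):
    """Least-squares re-fit on the selected columns; returns coefficients and residual."""
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return sol, y - A[:, support] @ sol

def omp_with_intercept(X, y, k):
    """Greedy OMP initialization in which the intercept column is always
    selected and counts as +1 toward the sparsity k (as described in the paper).
    Hypothetical helper, not the authors' code."""
    n, d = X.shape
    A = np.hstack([np.ones((n, 1)), X])     # column 0 is the intercept
    support = [0]                           # intercept is always in the support
    sol, residual = _refit(A, support, y)   # fit the intercept-only model first
    while len(support) < k:
        corr = A.T @ residual               # correlation of each column with the residual
        corr[support] = 0.0                 # never re-select chosen columns
        support.append(int(np.argmax(np.abs(corr))))
        sol, residual = _refit(A, support, y)
    coef = np.zeros(d + 1)
    coef[support] = sol
    return coef, support
```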