On the Iteration Complexity of Support Recovery via Hard Thresholding Pursuit

Authors: Jie Shen, Ping Li

ICML 2017

Reproducibility variables, each reported with its result and the supporting LLM response:
Research Type: Experimental
LLM Response: "6. Numerical Study: The HTP algorithm has been studied for several years and has found plenty of successful applications. There is also a large volume of empirical study, e.g., Bouchot et al. (2016), showing that HTP performs better in terms of computational efficiency and parameter estimation than compressive sampling matching pursuit (Needell & Tropp, 2009), subspace pursuit (Dai & Milenkovic, 2009), iterative hard thresholding (Blumensath & Davies, 2009), to name a few. Hence, the focus of our numerical study is to verify the theoretical findings in Section 3."
Researcher Affiliation: Academia
LLM Response: Rutgers University, Piscataway, New Jersey, USA.
Pseudocode: No
LLM Response: The paper outlines the HTP algorithm steps as (HTP1), (HTP2), and (HTP3) using bullet points and mathematical expressions, but they are not presented in a formally labeled "Algorithm" or "Pseudocode" block.
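The three steps referenced above follow the standard HTP template: a gradient step, hard thresholding to the k largest entries, and a debiasing solve over the selected support. Below is a minimal NumPy sketch of that template for the least-squares loss F(x) = 0.5 * ||y - A x||^2; the function name, iteration cap, and termination rule are our choices, not the paper's.

```python
import numpy as np

def htp(A, y, k, eta=1.0, max_iter=100):
    """Hard Thresholding Pursuit sketch for F(x) = 0.5 * ||y - A x||^2."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # (HTP1) gradient step: z = x - eta * grad F(x), with grad F(x) = -A^T (y - A x)
        z = x + eta * A.T @ (y - A @ x)
        # (HTP2) keep the indices of the k largest entries of z in magnitude
        new_support = np.sort(np.argsort(np.abs(z))[-k:])
        # (HTP3) minimize F over vectors supported on new_support (exact least-squares solve)
        sol, _, _, _ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
        x = np.zeros(n)
        x[new_support] = sol
        # HTP terminates once the selected support stops changing
        if np.array_equal(new_support, support):
            break
        support = new_support
    return x, support
```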
Open Source Code: No
LLM Response: The paper does not provide any statement or link indicating that source code for the described methodology is available.
Open Datasets: No
LLM Response: The paper describes generating synthetic data for its numerical study ("Data. In order to investigate the performance of HTP with both the exact and inexact solutions, we consider the linear regression model y = A x + σe, where x is a 100-dimensional vector with a tunable sparsity s. The elements in the design matrix A and the noise e are i.i.d. normal variables."), but it does not specify access to a publicly available or open dataset.
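The quoted data model is straightforward to reproduce. Here is a sketch under the stated model y = A x + σe with d = 100; the support location and the nonzero magnitudes of x are not specified in the quote, so the uniform random support and standard-normal entries below are assumptions.

```python
import numpy as np

def generate_data(m, d=100, s=10, sigma=0.1, rng=None):
    """Synthetic linear model y = A x + sigma * e from the paper's numerical study.

    The quote fixes d = 100 and i.i.d. normal A and e; the support location and
    nonzero magnitudes of x are our assumptions.
    """
    rng = np.random.default_rng(rng)
    x = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)  # assumed: uniformly random support
    x[support] = rng.standard_normal(s)             # assumed: standard-normal magnitudes
    A = rng.standard_normal((m, d))                 # i.i.d. normal design, as stated
    e = rng.standard_normal(m)                      # i.i.d. normal noise, as stated
    y = A @ x + sigma * e
    return A, y, x, np.sort(support)
```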
Dataset Splits: No
LLM Response: The paper does not provide details about training, validation, or test splits. It mentions generating synthetic data, but no splitting methodology is described for the experiments.
Hardware Specification: No
LLM Response: The paper does not specify any details about the hardware used to run the experiments.
Software Dependencies: No
LLM Response: The paper does not provide specific software dependencies or version numbers for the tools or libraries used in the numerical study.
Experiment Setup: Yes
LLM Response: "Solvers. We choose the least-squares loss as the proxy function F(x), for which an exact solution can be computed in (HTP3). We also implement the gradient descent (GD) algorithm to approximately solve (HTP3). In order to produce solutions with different accuracy ε, we run the GD algorithm with a varying number of gradient oracle calls. In this way, we are able to examine how ε affects support recovery through the number of oracle calls. Other settings. The step size η in HTP is fixed as η = 1. We use the true sparsity for the sparsity parameter k in (HTP2). For each configuration of sparsity, we generate 100 independent copies of x. Hence, all the experiments are performed with 100 trials."
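Putting the quoted settings together, the experimental loop might look like the sketch below, reusing the htp and generate_data sketches above. The inner GD step size (1/L) and the number of measurements m = 80 are our assumptions; the quote fixes only η = 1, k equal to the true sparsity, and 100 trials.

```python
import numpy as np

def solve_htp3_gd(A_S, y, n_calls):
    """Approximately solve (HTP3) on a fixed support S by gradient descent.

    n_calls gradient-oracle calls control the accuracy ε, mirroring the paper's
    setup; the step size 1/L (L = ||A_S||_2^2) is our assumption, since the
    quote does not fix the inner step size.
    """
    step = 1.0 / np.linalg.norm(A_S, 2) ** 2
    u = np.zeros(A_S.shape[1])
    for _ in range(n_calls):
        u = u + step * A_S.T @ (y - A_S @ u)  # one gradient oracle call
    return u

# One possible trial loop (m = 80 is our choice; the quote does not fix m).
trials, recovered = 100, 0
for seed in range(trials):                                # 100 independent copies of x
    A, y, x_true, true_support = generate_data(m=80, d=100, s=10, sigma=0.1, rng=seed)
    x_hat, support = htp(A, y, k=10, eta=1.0)             # eta = 1, k = true sparsity
    # Inexact variant: replace the lstsq solve inside htp with
    # solve_htp3_gd(A[:, support], y, n_calls) to vary the accuracy ε.
    recovered += int(np.array_equal(support, true_support))
print(f"Support recovery rate (exact inner solve): {recovered / trials:.2f}")
```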