Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations

Authors: Yasutoshi Ida, Sekitoshi Kanai, Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated the processing time and accuracy of our method on feature selection tasks. We performed experiments on five datasets from LIBSVM [9] and OpenML [34]: gisette, robert, ledgar, real-sim, and epsilon.
Researcher Affiliation | Industry | NTT Computer and Data Science Laboratories; NTT Communication Science Laboratories
Pseudocode | Yes | Algorithm 1: Iterative Hard Thresholding; Algorithm 2: Update of candidate set; Algorithm 3: Update of threshold; Algorithm 4: Fast Iterative Hard Thresholding. (A hedged sketch of the basic IHT update follows the table.)
Open Source Code | No | The NeurIPS Paper Checklist explicitly states 'Answer: [No]' for the question 'Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?'
Open Datasets | Yes | We performed experiments on five datasets from LIBSVM [9] and OpenML [34]: gisette, robert, ledgar, real-sim, and epsilon.
Dataset Splits | No | The paper lists the datasets used but does not provide explicit training/validation/test splits (percentages, sample counts, or predefined splits).
Hardware Specification | Yes | All the experiments were conducted on a 3.20 GHz Intel CPU with six cores and 64 GB of main memory.
Software Dependencies | No | The paper does not provide ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | We set the step sizes of all the methods to η = 1/λ, where λ is the largest eigenvalue of XᵀX, following [23]. We stopped these methods when the relative tolerance of the parameter vector dropped below 10⁻⁵ [23, 15].
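
For context, Algorithm 1 in the paper is standard iterative hard thresholding (IHT); Algorithms 2–4 accelerate it by pruning gradient computations via a candidate set and threshold, which the quoted material does not describe in enough detail to reconstruct. Below is a minimal NumPy sketch of plain IHT only, using the quoted step size (η = 1/λ, with λ the largest eigenvalue of XᵀX) and relative-tolerance stopping rule; the function names `iht` and `hard_threshold` are our own, not from the paper.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w; zero out the rest."""
    out = np.zeros_like(w)
    keep = np.argpartition(np.abs(w), -k)[-k:]
    out[keep] = w[keep]
    return out

def iht(X, y, k, tol=1e-5, max_iter=10_000):
    """Plain IHT for min_w ||y - X w||^2 subject to ||w||_0 <= k.

    Step size and stopping rule follow the paper's quoted setup; the
    pruning of gradient computations (Algorithms 2-4) is NOT implemented.
    """
    # eta = 1/lambda, where lambda is the largest eigenvalue of X^T X,
    # i.e. the squared largest singular value of X.
    eta = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        grad = X.T @ (X @ w - y)  # full gradient, no pruning
        w_next = hard_threshold(w - eta * grad, k)
        # Stop when the relative change of the parameter vector
        # drops below tol (10^-5 in the paper's setup).
        if np.linalg.norm(w_next - w) <= tol * max(np.linalg.norm(w), 1.0):
            return w_next
        w = w_next
    return w
```

A call such as `w = iht(X, y, k=50)` recovers a 50-sparse weight vector. The sketch computes λ as `np.linalg.norm(X, 2) ** 2` (the squared largest singular value of X) rather than forming XᵀX explicitly, which is equivalent and cheaper for tall matrices.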