Learning Sparse Distributions using Iterative Hard Thresholding

Authors: Jacky Y. Zhang, Rajiv Khanna, Anastasios Kyrillidis, Oluwasanmi O. Koyejo

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our contributions in this work are algorithmic and theoretical, with proof of concept empirical evaluation. We briefly summarize our contributions below. In addition to our conceptual and theoretical results, we present empirical studies that support our claims." |
| Researcher Affiliation | Academia | Jacky Y. Zhang, Department of Computer Science, University of Illinois at Urbana-Champaign, yiboz@illinois.edu; Rajiv Khanna, Department of Statistics, University of California at Berkeley, rajivak@berkeley.edu; Anastasios Kyrillidis, Department of Computer Science, Rice University, anastasios@rice.edu; Oluwasanmi Koyejo, Department of Computer Science, University of Illinois at Urbana-Champaign, sanmi@illinois.edu |
| Pseudocode | Yes | Algorithm 1: Distribution IHT; Algorithm 2: Greedy Sparse Projection (GSProj); Algorithm 3: Greedy Selection (a hedged sketch of the IHT iteration follows the table) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes. |
| Open Datasets | Yes | "We study representative prototype selection for the Digits data [31]." (a loading sketch follows the table) |
| Dataset Splits | No | While the paper mentions using a "test dataset" for the Digits data, it does not specify concrete training or validation splits (e.g., exact percentages, sample counts, or citations to predefined splits for all partitions). |
| Hardware Specification | No | The paper does not report specific hardware details (exact GPU/CPU models, processor types, or memory amounts) for its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | Initial step size µ = 0.008 (used in the toy run after the table). |
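
For orientation, here is a minimal sketch of what the Distribution IHT iteration (Algorithm 1) could look like: a gradient step on the distribution weights followed by a projection onto k-sparse probability vectors. The `sparse_simplex_project` helper is a hypothetical stand-in for the paper's GSProj (Algorithm 2): it keeps the k largest entries and Euclidean-projects them onto the simplex, which may differ from the authors' greedy procedure.

```python
import numpy as np

def sparse_simplex_project(v, k):
    """Project v onto k-sparse probability vectors.

    Hypothetical stand-in for GSProj: keep the k largest entries of v,
    then Euclidean-project them onto the probability simplex.
    """
    support = np.argsort(v)[-k:]                  # indices of the k largest entries
    w = v[support]
    u = np.sort(w)[::-1]                          # sorted-descending simplex projection
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, k + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    p = np.zeros_like(v, dtype=float)
    p[support] = np.maximum(w - theta, 0.0)
    return p

def distribution_iht(grad_f, n, k, mu=0.008, iters=500):
    """Sketch of Distribution IHT: projected gradient descent over
    distributions on n atoms, restricted to supports of size <= k."""
    p = np.full(n, 1.0 / n)                       # start from the uniform distribution
    for _ in range(iters):
        p = sparse_simplex_project(p - mu * grad_f(p), k)
    return p
```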
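
A toy run of the sketch above using the reported initial step size µ = 0.008 (held fixed here, since the paper's decay schedule is not quoted); the quadratic objective and problem sizes are illustrative only:

```python
import numpy as np

n, k = 50, 5
target = np.zeros(n)
target[:k] = 1.0 / k                              # a k-sparse distribution to recover
grad_f = lambda p: p - target                     # gradient of 0.5 * ||p - target||^2
p_hat = distribution_iht(grad_f, n, k, mu=0.008, iters=500)
print(np.flatnonzero(p_hat), round(p_hat.sum(), 6))  # support of size <= k, mass ~1
```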
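
Assuming the Digits data [31] refers to the UCI handwritten digits dataset, scikit-learn ships a small copy that would make the prototype-selection setup easy to reproduce (this mapping is a guess; the paper does not name a specific distribution):

```python
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)               # 1,797 flattened 8x8 grayscale images
print(X.shape, y.shape)                           # (1797, 64) (1797,)
```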