Learning Instance-wise Sparsity for Accelerating Deep Models

Authors: Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on benchmark datasets and networks demonstrate the effectiveness of the proposed method.
Researcher Affiliation | Collaboration | (1) Huawei Noah's Ark Lab; (2) School of Computer Science, FEIT, University of Sydney, Australia
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | We extensively evaluate our methods on two popular classification datasets: CIFAR-10 [Krizhevsky, 2009] and ImageNet (ILSVRC2012) [Deng et al., 2009].
Dataset Splits | No | The paper uses well-known datasets (CIFAR-10, ImageNet) and mentions the 'CIFAR-10 test set', but it does not give explicit percentages, sample counts, or citations for the training, validation, and test splits; in particular, no validation set is described, so the data partitioning cannot be fully reproduced.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | A number of ℓ2,1-norm regularization factors are considered, λ = 0, 1e-6, 1e-7, 1e-8 respectively. We set a global CV threshold as α... and set a drop threshold β ∈ [0, 2).
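The experiment-setup row above quotes the paper's hyperparameters but no training code is released. As a loose illustration only, the following minimal PyTorch sketch shows one plausible way an ℓ2,1-norm penalty weighted by λ could be added to a task loss; the mask shape, function names (l21_norm, total_loss), and loss wiring are assumptions made here for illustration, not the authors' implementation, and the CV threshold α and drop threshold β are not modeled.

```python
# Minimal sketch (assumptions, not the paper's released code):
# add an l2,1-norm penalty on per-instance channel masks to a task loss.
import torch
import torch.nn.functional as F

def l21_norm(mask: torch.Tensor) -> torch.Tensor:
    """l2,1 norm of a (batch, channels) mask: sum over instances of the
    per-instance l2 norm, encouraging whole rows to shrink toward zero."""
    return mask.norm(p=2, dim=1).sum()

def total_loss(logits: torch.Tensor,
               targets: torch.Tensor,
               mask: torch.Tensor,
               lam: float = 1e-7) -> torch.Tensor:
    """Cross-entropy task loss plus the sparsity penalty.
    lam plays the role of the regularization factor lambda reported
    in the experiment-setup row (0, 1e-6, 1e-7, 1e-8)."""
    ce = F.cross_entropy(logits, targets)
    return ce + lam * l21_norm(mask)

# Toy usage with random tensors; shapes are illustrative only.
if __name__ == "__main__":
    logits = torch.randn(8, 10)            # batch of 8, 10 classes
    targets = torch.randint(0, 10, (8,))   # random labels
    mask = torch.rand(8, 64)               # hypothetical per-instance channel mask
    print(total_loss(logits, targets, mask).item())
```

In this sketch the mask is assumed to come from some per-instance gating module elsewhere in the network; how such masks are produced and how channels are actually dropped at inference time are exactly the details the report flags as available only at the level of hyperparameter values, not code.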