Top-KAST: Top-K Always Sparse Training

Authors: Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, Erich Elsen

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the efficacy of our approach by showing that it performs comparably to or better than previous works when training models on the established ImageNet benchmark, whilst fully maintaining sparsity. In addition to our ImageNet results, we also demonstrate our approach in the domain of language modeling...
Researcher Affiliation | Collaboration | Siddhant M. Jayakumar (DeepMind, University College London); Razvan Pascanu (DeepMind, University College London); Jack W. Rae (DeepMind); Simon Osindero (DeepMind); Erich Elsen (DeepMind)
Pseudocode | No | The paper does not include any structured pseudocode or algorithm blocks. (An illustrative, non-authoritative sketch of the core top-K masking step is given after this table.)
Open Source Code | No | The paper mentions that the approach can be easily implemented but does not explicitly state that its source code is open or provide a link.
Open Datasets | Yes | We demonstrate the efficacy of our method at enabling sparse training of models across different modalities (vision and language), model types (convolutions and attention) and different sparsity regimes. We start by demonstrating the efficacy of our method on the ImageNet dataset for image classification, where we train a sparse ResNet-50 as in previous works [7, 10]. [...] We use Top-KAST to train language models on two commonly-benchmarked datasets: Enwik8 [24], which is a character-level benchmark derived from the Hutter Prize, and WikiText-103 [25], which is a word-level language modeling benchmark.
Dataset Splits | No | The paper refers to training runs and datasets but does not explicitly state the specific training/validation/test dataset splits (e.g., percentages or sample counts) within the provided text.
Hardware Specification | No | The paper mentions general hardware capabilities (e.g., modern hardware, NVIDIA A100) and a CPU for a specific operation, but does not specify the exact hardware (GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'existing machine learning frameworks' but does not provide specific software names with version numbers.
Experiment Setup | No | The paper states that 'full details of model and hyper-parameters in the appendix B' and 'training hyper-parameters are displayed in Supplementary Section A', but these sections are not provided in the given text.
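
Because the paper provides no pseudocode or released code, the following is a minimal sketch of the top-K magnitude masking that Top-KAST is built around, written as illustrative NumPy rather than the authors' implementation. The helper name topk_mask, the tensor shapes, and the density values are assumptions for exposition only and are not taken from the paper's experiments.

import numpy as np

def topk_mask(weights: np.ndarray, density: float) -> np.ndarray:
    """Binary mask keeping the top `density` fraction of weights by magnitude."""
    k = max(1, int(round(density * weights.size)))
    flat = np.abs(weights).ravel()
    # Threshold at the k-th largest magnitude; ties may admit slightly more entries.
    threshold = np.partition(flat, -k)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

# Illustrative parameters (assumed, not from the paper).
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))   # dense parameter buffer kept during training
forward_density = 0.1             # fraction of weights active in the forward pass
backward_density = 0.2            # larger fraction that receives gradient updates

forward_mask = topk_mask(W, forward_density)    # forward (active) set
backward_mask = topk_mask(W, backward_density)  # backward set, a superset of the forward set

sparse_W = W * forward_mask
# In training, gradients would be applied only where backward_mask == 1, and both
# masks would be recomputed periodically from the partially updated dense buffer.

The key point this sketch tries to convey, consistent with the paper's description, is that the forward pass only ever touches a sparse top-K subset of the weights, while a somewhat larger subset receives gradients so that currently inactive weights can grow in magnitude and enter the active set at the next mask refresh.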