Channel Gating Neural Networks

Authors: Weizhe Hua, Yuan Zhou, Christopher M. De Sa, Zhiru Zhang, G. Edward Suh

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally show that applying channel gating in state-of-the-art networks achieves 2.7-8.0× reduction in floating-point operations (FLOPs) and 2.0-4.4× reduction in off-chip memory accesses with a minimal accuracy loss on CIFAR-10.
Researcher Affiliation | Academia | Weizhe Hua (wh399@cornell.edu), Yuan Zhou (yz882@cornell.edu), Christopher De Sa (cdesa@cornell.edu), Zhiru Zhang (zhiruz@cornell.edu), G. Edward Suh (gs272@cornell.edu)
Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. (An illustrative sketch of the gating computation is given after this table.)
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the methodology it describes.
Open Datasets | Yes | We first evaluate CGNets only with the activation-wise gate on CIFAR-10 [17] and ImageNet (ILSVRC 2012) [4] datasets to compare the accuracy and FLOP reduction trade-off with prior arts.
Dataset Splits | No | The paper mentions using the CIFAR-10 and ImageNet datasets but does not explicitly provide details of the training, validation, and test splits (e.g., percentages or exact counts) beyond implying standard usage.
Hardware Specification | Yes | Platforms: Intel i7-7700K (CPU), NVIDIA GTX 1080 Ti (GPU), ASIC
Software Dependencies | No | The paper mentions using 'MXNet [2]' as a framework but does not provide specific version numbers for it or any other software dependencies.
Experiment Setup | Yes | We choose a uniform target threshold (T) and number of groups (G) for all CGNets for the experiments in Section 5.1 and 5.2. ... We leverage KD to improve the accuracy of CGNets on ImageNet, where a ResNet-50 model is used as the teacher of our ResNet-18 based CGNets with κ = 1 and λ_kd = 0.5. (A hedged sketch of one possible form of this distillation loss follows the table.)
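
Since the paper contains no pseudocode, the following is a minimal PyTorch-style sketch of what an activation-wise channel-gated convolution could look like, assuming the base/conditional channel split and the per-location comparison of a partial sum against the target threshold T that the quoted passages refer to. All names (ChannelGatedConv2d, base_frac, etc.) are hypothetical; the grouping by G, the normalization of the partial sum, and the smooth training-time approximation of the gate are omitted. This is not the authors' implementation.

```python
# Illustrative sketch only; names and details are assumptions, not the authors' code.
import torch
import torch.nn as nn

class ChannelGatedConv2d(nn.Module):
    """Always computes a partial sum from a 'base' subset of input channels,
    then computes the remaining channels only at spatial locations the gate
    considers effective (partial sum above the target threshold T)."""

    def __init__(self, in_ch, out_ch, threshold=0.5, base_frac=0.5):
        super().__init__()
        self.base_ch = max(1, int(in_ch * base_frac))   # channels always computed
        self.conv_base = nn.Conv2d(self.base_ch, out_ch, 3, padding=1, bias=False)
        self.conv_rest = nn.Conv2d(in_ch - self.base_ch, out_ch, 3, padding=1, bias=False)
        self.threshold = threshold                       # target threshold T

    def forward(self, x):
        x_base, x_rest = x[:, :self.base_ch], x[:, self.base_ch:]
        partial = self.conv_base(x_base)                 # always computed
        # Hard gate: locations whose partial sum stays below T are treated as
        # ineffective and skip the rest path. (Training would use a smooth
        # approximation of this decision; that detail is omitted here.)
        gate = (partial > self.threshold).float()
        return partial + gate * self.conv_rest(x_rest)
```

For example, `y = ChannelGatedConv2d(64, 64)(torch.randn(1, 64, 32, 32))` runs the block on a random feature map. Note that multiplying by a dense mask, as above, does not by itself reduce FLOPs; the reported 2.7-8.0× savings come from actually skipping the gated-off locations, which requires sparse or specialized kernels such as the CPU/GPU/ASIC implementations listed under Hardware Specification.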
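
The Experiment Setup row mentions knowledge distillation with a ResNet-50 teacher, κ = 1, and λ_kd = 0.5, but the quoted text does not define how the loss terms are combined. The sketch below is only the standard Hinton-style formulation, with κ assumed to act as the softmax temperature and λ_kd as the weight of the distillation term; the paper's exact loss may differ.

```python
# Hedged sketch of a common knowledge-distillation loss; the role of kappa and
# the way the terms are combined are assumptions, not taken from the paper.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, lambda_kd=0.5, kappa=1.0):
    # Hard-label cross-entropy for the student (the ResNet-18-based CGNet).
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term against the ResNet-50 teacher; kappa acts as the
    # softmax temperature here (assumption).
    soft_teacher = F.softmax(teacher_logits / kappa, dim=1)
    log_soft_student = F.log_softmax(student_logits / kappa, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (kappa ** 2)
    return ce + lambda_kd * kd
```

Under this formulation, κ = 1 and λ_kd = 0.5 reduce the objective to the plain cross-entropy plus half the KL divergence between the student's and teacher's output distributions.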