ESPACE: Accelerating Convolutional Neural Networks via Eliminating Spatial and Channel Redundancy

Authors: Shaohui Lin, Rongrong Ji, Chao Chen, Feiyue Huang

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The proposed method is evaluated on ImageNet 2012 with implementations on two widely adopted CNNs, i.e., AlexNet and GoogLeNet. In comparison to several recent methods of CNN acceleration, the proposed scheme demonstrates new state-of-the-art acceleration performance, with speedups by a factor of 5.48 on AlexNet and 4.12 on GoogLeNet, respectively, at a minimal decrease in classification accuracy. |
| Researcher Affiliation | Collaboration | Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, 361005, China; School of Information Science and Engineering, Xiamen University, 361005, China; BestImage Lab, Tencent Technology (Shanghai) Co., Ltd., China |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code, nor a direct link to a code repository for the described methodology. |
| Open Datasets | Yes | The evaluation uses the ImageNet 2012 dataset (Deng et al. 2009), which contains more than 1 million training images from 1,000 object classes and a validation set of 50,000 images. |
| Dataset Splits | Yes | The splits are specified: more than 1 million ImageNet 2012 training images from 1,000 object classes and a validation set of 50,000 images. |
| Hardware Specification | Yes | The proposed ESPACE model is trained (or fine-tuned) using Caffe and run on a 24-core Intel E5-2620 CPU and an NVIDIA GTX TITAN X graphics card with 12 GB of memory, with 32 GB of system RAM. |
| Software Dependencies | No | The paper mentions Caffe but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The learning rate starts at 0.0001 and is halved every 10,000 iterations, with batch size 100 for AlexNet and 32 for GoogLeNet. The weight decay is set to 0.0005 and the momentum to 0.95. |
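The reported fine-tuning schedule maps directly onto Caffe's standard SGD solver settings (step decay with gamma 0.5 every 10,000 iterations). As a rough illustration only, not code from the paper, the sketch below builds a matching solver configuration with pycaffe; the network path, max_iter, and output filename are hypothetical placeholders.

```python
# Minimal sketch (assumption, not from the paper): a Caffe SolverParameter
# mirroring the reported hyperparameters. Batch size is not set here because
# Caffe reads it from the data layer of the train/val net prototxt
# (100 for AlexNet, 32 for GoogLeNet in the paper).
from caffe.proto import caffe_pb2

solver = caffe_pb2.SolverParameter()
solver.net = "models/espace_alexnet/train_val.prototxt"  # hypothetical path
solver.base_lr = 0.0001        # learning rate starts at 0.0001
solver.lr_policy = "step"      # step decay...
solver.gamma = 0.5             # ...halving the learning rate
solver.stepsize = 10000        # ...every 10,000 iterations
solver.momentum = 0.95         # momentum 0.95
solver.weight_decay = 0.0005   # weight decay 0.0005
solver.max_iter = 100000       # placeholder; not stated in the paper
solver.solver_mode = caffe_pb2.SolverParameter.GPU

# Write the text-format solver file that `caffe train --solver=...` expects.
with open("espace_solver.prototxt", "w") as f:
    f.write(str(solver))
```

A solver file like this would then be passed to `caffe train --solver=espace_solver.prototxt`, with the batch sizes set in the corresponding net definition rather than in the solver.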