Energy-efficient Amortized Inference with Cascaded Deep Classifiers

Authors: Jiaqi Guan, Yang Liu, Qiang Liu, Jian Peng

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Moreover, we demonstrate our method's effectiveness with extensive experiments on CIFAR-10/100, ImageNet32x32 and original ImageNet dataset.
Researcher Affiliation | Academia | Jiaqi Guan (1,2), Yang Liu (2), Qiang Liu (3), Jian Peng (2); 1 Tsinghua University, 2 University of Illinois at Urbana-Champaign, 3 University of Texas at Austin
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | No | The paper does not provide any links or explicit statements about releasing source code.
Open Datasets | Yes | As a proof of concept, we implement a cascade of deep neural network classifiers on four image classification datasets, including CIFAR-10, CIFAR-100 [Krizhevsky and Hinton, 2009], ImageNet32x32 [Chrabaszcz et al., 2017] and original ImageNet [Russakovsky et al., 2015] dataset.
Dataset Splits | No | The CIFAR-10 and CIFAR-100 datasets both have 50,000 training images and 10,000 test images, with 10 classes and 100 classes respectively. The ImageNet32x32 dataset... contains... 1.28 million training images and 50k test images... While train and test splits are mentioned, specific validation splits or methodologies for obtaining them are not detailed. The phrase 'internal cross-validation within the training data' is too vague to constitute a reproducible split.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions software components like 'Caffe Model Zoo' and optimization methods, but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | For all the experiments, we use stochastic gradient descent with a momentum of 0.9 for the policy optimization. The learning rate schedule and the mini-batch size are set to be the same as in the original ResNet [He et al., 2016a] for the associated gradients. The learning rate for the stopping policy module is set to be 0.1 and an exponential decay with a factor of 0.9 is applied every four epochs according to internal cross-validation within the training data.
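
As a reading aid for the setup quoted above, the sketch below spells out those optimizer settings: SGD with momentum 0.9, a learning rate of 0.1 for the stopping policy module, and exponential decay by a factor of 0.9 applied every four epochs. It is written in PyTorch purely for illustration; the paper does not state which framework was used, and the policy module, dummy data, and loss here are placeholders rather than the authors' implementation.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

# Dummy CIFAR-sized data as a stand-in; the real experiments use CIFAR-10/100 and ImageNet variants.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 2, (256,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=128, shuffle=True)

# Placeholder stopping-policy module; the paper does not describe its exact architecture in this report.
policy_module = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

# SGD with momentum 0.9 and an initial learning rate of 0.1, as reported for the stopping policy module.
optimizer = optim.SGD(policy_module.parameters(), lr=0.1, momentum=0.9)

# Decay the learning rate by a factor of 0.9 once every four epochs.
scheduler = StepLR(optimizer, step_size=4, gamma=0.9)

criterion = nn.CrossEntropyLoss()  # stand-in objective; the paper optimizes a policy objective instead

for epoch in range(12):  # epoch count is illustrative only
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(policy_module(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # applies the 0.9 decay at every fourth epoch boundary
```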