Weighted Channel Dropout for Regularization of Deep Convolutional Neural Network

Authors: Saihui Hou, Zilei Wang (pp. 8425-8432)

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | WCD with VGGNet-16, ResNet-101, and Inception-V3 is experimentally evaluated on multiple datasets. The extensive results demonstrate that WCD brings consistent improvements over the baselines.
Researcher Affiliation | Academia | Saihui Hou, Zilei Wang; Department of Automation, University of Science and Technology of China; saihui@mail.ustc.edu.cn, zlwang@ustc.edu.cn
Pseudocode | Yes | Algorithm 1 (Weighted Random Selection). Input: score_i > 0, mask_i = 0, i = 1, 2, ..., N; wrs_ratio. Output: mask_i, i = 1, 2, ..., N. (A hedged sketch of this procedure appears below the table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing its source code or a link to a repository.
Open Datasets | Yes | CUB-200-2011 (Wah et al. 2011), Stanford Cars (Krause et al. 2013), and Caltech-256 (Griffin, Holub, and Perona 2007) are all well-known public datasets, used and properly cited.
Dataset Splits | Yes | The hyper-parameters, including wrs_ratio and q, are set by cross-validation and kept consistent across similar datasets such as CUB-200-2011 and Stanford Cars.
Hardware Specification | Yes | All the models are implemented with Caffe (Jia et al. 2014) on Titan-X GPUs.
Software Dependencies | No | The paper mentions Caffe (Jia et al. 2014) but does not provide a specific version number for Caffe or any other software dependency.
Experiment Setup | Yes | The initial learning rate is set to 0.001 and is reduced to 1/10 of its value three times until convergence. Stochastic gradient descent (SGD) is used for optimization. The hyper-parameters, including wrs_ratio and q, are set by cross-validation and kept consistent across similar datasets such as CUB-200-2011 and Stanford Cars. (A schedule sketch follows below.)
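
The WRS pseudocode reported in the table (per-channel scores in, binary mask out) matches the shape of Efraimidis-Spirakis weighted random sampling, where each channel draws a random key biased by its score and the largest keys survive. The sketch below is an illustrative reconstruction under that assumption, not the paper's code: the key formula `r_i ** (1 / score_i)` and the reading of wrs_ratio as the fraction of channels to keep are assumptions.

```python
import numpy as np

def weighted_random_selection(scores, wrs_ratio, rng=None):
    """Hedged sketch of Algorithm 1 (Weighted Random Selection).

    Assumptions (not confirmed by the table above):
    - key_i = r_i ** (1 / score_i), the Efraimidis-Spirakis weighted
      sampling key, so higher-scoring channels are more likely to survive;
    - wrs_ratio is the fraction of the N channels to keep.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=np.float64)   # score_i > 0 per the input spec
    n = scores.shape[0]
    m = max(1, int(round(wrs_ratio * n)))           # number of channels to keep
    r = rng.random(n)                               # r_i ~ U(0, 1)
    keys = r ** (1.0 / scores)                      # score-weighted random keys
    mask = np.zeros(n, dtype=np.float32)            # mask_i = 0 initially
    mask[np.argsort(keys)[-m:]] = 1.0               # mask_i = 1 for the m largest keys
    return mask

# Example: channel scores (e.g., from global average pooling), keep ~50% of channels
mask = weighted_random_selection(scores=[0.2, 1.5, 0.7, 3.0], wrs_ratio=0.5)
```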
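
The learning-rate policy in the last row is a standard step decay. The helper below reproduces that behavior for reference; the decay boundaries are illustrative placeholders, since the exact decay iterations are not reported here.

```python
def step_decay_lr(iteration, boundaries, base_lr=0.001, factor=0.1):
    """Step-decay schedule matching the described setup: start at 1e-3 and
    divide the learning rate by 10 at each boundary (three reductions in
    total). The boundary iterations are illustrative assumptions."""
    drops = sum(iteration >= b for b in boundaries)
    return base_lr * (factor ** drops)

# Hypothetical boundaries; lr steps through 1e-3 -> 1e-4 -> 1e-5 -> 1e-6
for it in (0, 10_000, 20_000, 30_000):
    print(it, step_decay_lr(it, boundaries=(10_000, 20_000, 30_000)))
```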