Gated Convolutional Networks with Hybrid Connectivity for Image Classification

Authors: Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu

AAAI 2020, pp. 12581-12588 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on CIFAR and ImageNet datasets show that HCGNet is more prominently efficient than DenseNet, and can also significantly outperform state-of-the-art networks with less complexity.
Researcher Affiliation | Academia | Chuanguang Yang (1,2), Zhulin An (1), Hui Zhu (1,2), Xiaolong Hu (1,2), Kun Zhang (1,2), Kaiqiang Xu (1,2), Chao Li (1), Yongjun Xu (1); (1) Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; {yangchuanguang, anzhulin, zhuhui, huxiaolong18g, zhangkun17g, xukaiqiang, lichao, xyj}@ict.ac.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available or provide a link.
Open Datasets | Yes | We perform extensive experiments across the three highly competitive image classification datasets: CIFAR-10/100 (Krizhevsky and Hinton 2009), and ImageNet (ILSVRC 2012) (Deng et al. 2009).
Dataset Splits | Yes | ImageNet 2012 dataset comprises 1.2 million training images and 50k validation images corresponding to 1000 classes.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | We employ a stochastic gradient descent (SGD) optimizer with momentum 0.9 and batch size 128. Training is regularized by weight decay 1 × 10^-4 and mixup with α = 1 (Zhang et al. 2017). For HCGNet-A1, we train it for 1270 epochs by SGDR (Loshchilov and Hutter 2016) learning rate curve with initial learning rate 0.1, T_0 = 10, T_mult = 2.
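
The quoted setup maps onto standard components. Note that with T_0 = 10 and T_mult = 2, the SGDR restart cycles last 10, 20, 40, 80, 160, 320, and 640 epochs, which sum to exactly the 1270 training epochs quoted for HCGNet-A1. Below is a minimal PyTorch sketch of that configuration under stated assumptions: the linear stand-in model, the random batch, and the `mixup` helper are illustrative placeholders, since the paper releases no reference code.

```python
import torch
import torch.nn as nn

# Stand-in for HCGNet-A1 (hypothetical; the real architecture is not reproduced here).
model = nn.Linear(3 * 32 * 32, 100)
criterion = nn.CrossEntropyLoss()

# SGD with momentum 0.9, weight decay 1e-4, initial learning rate 0.1 (as quoted).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)

# SGDR (Loshchilov and Hutter 2016): cosine annealing with warm restarts.
# Cycle lengths 10, 20, 40, 80, 160, 320, 640 sum to the quoted 1270 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=10, T_mult=2)

def mixup(x, y, alpha=1.0):
    """Mixup (Zhang et al. 2017): blend a batch with a shuffled copy of
    itself, weighted by lam drawn from Beta(alpha, alpha)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], y, y[idx], lam

# One illustrative step on a random batch of 128 CIFAR-sized inputs.
x = torch.randn(128, 3 * 32 * 32)
y = torch.randint(0, 100, (128,))
mixed_x, y_a, y_b, lam = mixup(x, y, alpha=1.0)
logits = model(mixed_x)
loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)
loss.backward()
optimizer.step()
scheduler.step()  # in practice stepped once per epoch over the 1270 epochs
```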