Uncertainty Quantification in CNN Through the Bootstrap of Convex Neural Networks

Authors: Hongfei Du, Emre Barut, Fang Jin
Pages: 12078-12085

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally demonstrate our approach has a much better performance compared to other baseline CNNs and state-of-the-art methods on various image datasets."
Researcher Affiliation | Collaboration | Hongfei Du (1), Emre Barut (2), Fang Jin (1); (1) The George Washington University; (2) Amazon.com, Inc.
Pseudocode | Yes | "Algorithm 1 CCNN Bootstrap" (a hedged sketch of the bootstrap loop follows this table)
Open Source Code | No | The paper does not contain any statements about making the source code available, nor does it provide any links to a code repository.
Open Datasets | Yes | "We use five datasets: MNIST (Le Cun et al. 1998), noisy MNIST (Basu et al. 2017), fashion MNIST (Xiao, Rasul, and Vollgraf 2017), CIFAR10 (Krizhevsky 2009) and the cats and dogs dataset (Parkhi et al. 2012)"
Dataset Splits | Yes | "For the first three datasets... We use 60,000 images for training and 1,000 images for testing. For the Cats and Dogs dataset... We use 10,000 images for training and 1,000 images for testing."
Hardware Specification | Yes | "The CCNN runs on the 16 cores CPU with 64GB RAM, and other classic neural networks run on GPU."
Software Dependencies | No | The paper mentions neural network architectures like 'Le-Net' and 'VGG16' but does not specify any software libraries (e.g., TensorFlow, PyTorch) or their version numbers used for implementation or experimentation.
Experiment Setup | Yes | "We reduce the size of the training dataset (only 1000 images in the train set and 100 images in test set) and also reduce the training iterations to 5 at each bootstrap. We set the number of bootstraps B = 1000... For the first three datasets, the ensemble method and the bootstrap CNN use the classic CNN, Le-Net, with 3 convolution and 2 fully connected layers, where the numbers of convolution filters are (32,64,128) with a kernel size of (2,2)." (an illustrative architecture sketch follows this table)
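The paper's method is summarized only as "Algorithm 1 CCNN Bootstrap" and no source code is released, so the loop below is a minimal sketch of the generic resample/train/aggregate pattern that a bootstrap of convex CNNs implies. It assumes NumPy arrays and a hypothetical train_ccnn callable returning a classifier with a predict_proba method; none of these names come from the paper.

    # Hedged sketch of a bootstrap-ensemble prediction loop; train_ccnn is a
    # hypothetical stand-in for the paper's convex-CNN solver, which is not public.
    import numpy as np

    def bootstrap_predict(X_train, y_train, X_test, train_ccnn, B=1000, seed=0):
        """Train B models on bootstrap resamples and return the mean prediction
        and its bootstrap standard deviation as an uncertainty estimate."""
        rng = np.random.default_rng(seed)
        n = len(X_train)
        preds = []
        for _ in range(B):
            idx = rng.integers(0, n, size=n)                 # resample with replacement
            model = train_ccnn(X_train[idx], y_train[idx])   # fit one bootstrap replicate
            preds.append(model.predict_proba(X_test))
        preds = np.stack(preds)                              # shape (B, n_test, n_classes)
        return preds.mean(axis=0), preds.std(axis=0)

The spread of the B = 1000 replicate predictions is what supplies the uncertainty estimate; per the Hardware Specification row, the authors run the CCNN fits on a 16-core CPU rather than a GPU.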
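The Experiment Setup row pins down the filter counts (32, 64, 128), the 2x2 kernels, and the 3-convolution/2-fully-connected layout, but not the framework, pooling, activations, or hidden widths (the Software Dependencies row notes no libraries are named). The PyTorch sketch below is therefore illustrative only: 28x28 grayscale inputs, ReLU activations, 2x2 max pooling, and the hidden width of 128 are all assumptions.

    # Illustrative Le-Net variant matching the stated (32, 64, 128) filters and
    # 2x2 kernels; pooling, activations, and FC widths are assumed, not stated.
    import torch
    import torch.nn as nn

    class LeNetVariant(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=2), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 27 -> 13
                nn.Conv2d(32, 64, kernel_size=2), nn.ReLU(), nn.MaxPool2d(2),  # 13 -> 12 -> 6
                nn.Conv2d(64, 128, kernel_size=2), nn.ReLU(), nn.MaxPool2d(2), # 6 -> 5 -> 2
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 2 * 2, 128), nn.ReLU(),  # assumed hidden width
                nn.Linear(128, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    print(LeNetVariant()(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 10])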