Neural Bootstrapper

Authors: Minsuk Shin, Hyungjoo Cho, Hyun-seok Min, Sungbin Lim

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical results show that NeuBoots outperforms other bagging based methods under a much lower computational cost without losing the validity of bootstrapping.
Researcher Affiliation | Collaboration | Minsuk Shin (Department of Statistics, University of South Carolina), Hyungjoo Cho (Department of Transdisciplinary Studies, Seoul National University), Hyun-seok Min (Tomocube Inc.), Sungbin Lim (Artificial Intelligence Graduate School, UNIST)
Pseudocode | Yes | Algorithm 1: Training step in NeuBoots. Algorithm 2: Prediction step in NeuBoots. (A hedged sketch of both steps appears after this table.)
Open Source Code | Yes | Our code is open to the public. https://github.com/sungbinlim/NeuBoots
Open Datasets | Yes | We apply NeuBoots to image classification tasks on CIFAR and SVHN... To demonstrate the applicability of NeuBoots to different computer vision tasks, we validate NeuBoots on the PASCAL VOC 2012 semantic segmentation benchmark [9]
Dataset Splits | No | The paper mentions using training and test sets but does not provide specific percentages, counts, or explicit descriptions of how datasets were split for training, validation, and testing. While it uses common benchmarks, the split details are not explicitly stated.
Hardware Specification | Yes | we measure the prediction time by ResNet-34 between NeuBoots and MCDrop on the test set of CIFAR-10 with Nvidia V100 GPUs.
Software Dependencies | No | The paper mentions using deep convolutional networks and specific architectures (e.g., ResNet-34, ResNet-110, DenseNet-100), but it does not specify software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x).
Experiment Setup | Yes | All models are trained using SGD with a momentum of 0.9, an initial learning rate of 0.1, and a weight decay of 0.0005 with a mini-batch size of 128. We use Cosine Annealing for the learning rate scheduler. We implement MCDrop and evaluate its performance with dropout rate p = 0.2. (A sketch of this configuration appears after this table.)
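
The Pseudocode row names Algorithm 1 (training step) and Algorithm 2 (prediction step), but the extracted text does not reproduce them. Below is a minimal, hedged sketch of what a bootstrap-weighted training step and a weight-averaged prediction step could look like, assuming group-wise Dirichlet bootstrap weights that are passed to the network as an auxiliary input and also reweight the per-sample loss; the `model(x, alpha)` interface, `n_groups`, and all function names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, y, n_groups=100):
    """One bootstrap-weighted training step (illustrative stand-in for Algorithm 1)."""
    # Sample bootstrap weights alpha ~ Dirichlet(1, ..., 1), rescaled so they average to 1.
    alpha = torch.distributions.Dirichlet(torch.ones(n_groups)).sample() * n_groups
    # Assign each sample in the mini-batch to a group and look up its weight.
    groups = torch.randint(0, n_groups, (x.size(0),))
    w = alpha[groups]
    # Assumed interface: the network also conditions on the bootstrap weight vector.
    logits = model(x, alpha)
    loss = (w * F.cross_entropy(logits, y, reduction="none")).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(model, x, n_bootstrap=50, n_groups=100):
    """Average class probabilities over sampled weights (illustrative stand-in for Algorithm 2)."""
    probs = []
    for _ in range(n_bootstrap):
        alpha = torch.distributions.Dirichlet(torch.ones(n_groups)).sample() * n_groups
        probs.append(F.softmax(model(x, alpha), dim=-1))
    return torch.stack(probs).mean(dim=0)  # bagged predictive distribution
```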
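
The Hardware Specification row quotes a prediction-time comparison between NeuBoots and MCDrop with ResNet-34 on Nvidia V100 GPUs. The snippet below is only a sketch of how such a measurement could be taken in PyTorch with CUDA events; the protocol (single GPU, one full pass over the test loader) is an assumption rather than the paper's stated procedure.

```python
import torch

def time_inference(model, test_loader, device="cuda"):
    """Rough wall-clock timing of one pass over a test loader (assumed protocol)."""
    model.to(device).eval()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    with torch.no_grad():
        for x, _ in test_loader:
            model(x.to(device))
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / 1000.0  # elapsed time in seconds
```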
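
The Experiment Setup row fully specifies the optimizer and scheduler. The sketch below wires those quoted values (SGD, momentum 0.9, initial learning rate 0.1, weight decay 0.0005, mini-batch size 128, Cosine Annealing) into a PyTorch configuration; the model choice, epoch count, and training set are placeholders assumed for illustration.

```python
import torch
import torch.nn as nn
import torchvision

# Placeholder model; the paper evaluates ResNet/DenseNet variants on CIFAR, SVHN, etc.
model = torchvision.models.resnet34(num_classes=10)
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # initial learning rate (quoted)
    momentum=0.9,       # momentum (quoted)
    weight_decay=5e-4,  # weight decay (quoted)
)

epochs = 200  # assumed; the epoch count is not given in the quoted setup
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

# Mini-batch size of 128 (quoted); `train_set` is a placeholder dataset object.
# train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
```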