Convolutional Neural Fabrics

Authors: Shreyas Saxena, Jakob Verbeek

NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present benchmark results competitive with the state of the art for image classification on MNIST and CIFAR10, and for semantic segmentation on the Part Labels dataset." The reported benchmark results identify the work as experimental.
Researcher Affiliation | Academia | Shreyas Saxena and Jakob Verbeek, INRIA Grenoble, Laboratoire Jean Kuntzmann.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "We release our Caffe-based implementation at http://thoth.inrialpes.fr/~verbeek/fabrics."
Open Datasets | Yes | Part Labels: "This dataset [10] consists of 2,927 face images from the LFW dataset [8]..." MNIST: "This dataset [16] consists of 28×28 pixel images..." CIFAR10: "The CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) consists of 50k 32×32 training images and 10k testing images in 10 classes."
Dataset Splits | Yes | Part Labels: "training, validation and test sets of 1,500, 500 and 927 images, respectively." MNIST: "standard split of the dataset into 50k training samples, 10k validation samples and 10k test samples." CIFAR10: "We hold out 5k training images as validation set, and use the remaining 45k as the training set." (See the CIFAR10 split sketch below the table.)
Hardware Specification | No | The acknowledgments thank "NVIDIA for the donation of GPUs", but the paper gives no GPU model numbers or other details of the hardware used for the experiments.
Software Dependencies | No | The paper mentions a Caffe-based implementation but does not give version numbers for Caffe or any other software dependency.
Experiment Setup | Yes | "We train our fabrics using SGD with momentum of 0.9. After each node in the trellis we apply batch normalization [9], and regularize the model with weight decay of 10^-4, but did not apply dropout [30]. We use the validation set to determine the optimal number of training epochs, and then train a final model from the train and validation data and report performance on the test set." (See the training-setup sketch below the table.)
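
The CIFAR10 protocol quoted in the Dataset Splits row (hold out 5k of the 50k training images for validation, train on the remaining 45k) can be illustrated with a short Python sketch. This is not the authors' pipeline: their released implementation is Caffe-based, torchvision is used here only as a stand-in data loader, and the paper does not say how the held-out images were selected, so the fixed random permutation and seed below are assumptions.

```python
# Minimal sketch of the CIFAR10 split described in the paper: hold out 5k of
# the 50k training images for validation, keep the remaining 45k for training.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())

# The paper does not specify how the 5k validation images were chosen;
# a seeded random permutation is assumed here.
generator = torch.Generator().manual_seed(0)
perm = torch.randperm(len(full_train), generator=generator)
val_set = Subset(full_train, perm[:5_000].tolist())    # 5k validation images
train_set = Subset(full_train, perm[5_000:].tolist())  # 45k training images

print(len(train_set), len(val_set), len(test_set))  # 45000 5000 10000
```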
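
Similarly, the optimization settings quoted in the Experiment Setup row (SGD with momentum 0.9, batch normalization after each trellis node, weight decay of 10^-4, no dropout) map onto standard training code. The sketch below is a hedged PyTorch rendering, not the released Caffe implementation: FabricBlock is a hypothetical stand-in for a single trellis node rather than the full multi-scale fabric, and the learning rate, channel width, and network depth are assumptions not taken from the paper.

```python
# Hedged sketch of the reported optimization settings: SGD with momentum 0.9,
# batch normalization after each node, weight decay 1e-4, and no dropout.
import torch
import torch.nn as nn

class FabricBlock(nn.Module):
    """Hypothetical stand-in for one trellis node: conv + batch norm."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)  # batch norm after each node (paper)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

model = nn.Sequential(FabricBlock(3, 64), FabricBlock(64, 64),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(64, 10))

optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.1,             # assumed; not given in the paper
                            momentum=0.9,       # from the paper
                            weight_decay=1e-4)  # from the paper; no dropout used
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Per the quoted protocol, the epoch count would be chosen on the validation split, after which a final model is retrained on the combined train and validation data before evaluating on the test set.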