Introducing Routing Uncertainty in Capsule Networks

Authors: Fabio De Sousa Ribeiro, Georgios Leontidis, Stefanos Kollias

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "we focus on enhancing capsule network properties, and perform a thorough evaluation on pose-aware tasks, observing improvements in performance over previous approaches whilst being more computationally efficient."
Researcher Affiliation | Academia | Machine Learning Group, University of Lincoln, UK ({fdesousaribeiro,skollias}@lincoln.ac.uk); Department of Computing Science, University of Aberdeen, UK (georgios.leontidis@abdn.ac.uk)
Pseudocode | Yes | "Algorithm 1: Capsule Layer with Routing Uncertainty. Returns updated object capsules c_j = {a_j, M_j}^(ℓ+1), given part capsules c_i = {a_i, M_i}^ℓ. Performs ML/MAP inference of transformation weights W, and variational inference of latent part-object connection variables z."
Open Source Code | Yes | Code available at: https://github.com/fabio-deep/Routing-Uncertainty-CapsNet
Open Datasets | Yes | "smallNORB [33] consists of grey-level stereo 96×96 images of 5 objects, each given at 18 different azimuths (0-340), 9 elevations and 6 lighting conditions, with 24,300 training and test set examples."
Dataset Splits | Yes | "During training we take 32×32 random crops, and centre crops at test time. We train on training set images with azimuths A_train = {300, 320, 340, 0, 20, 40}, denoted as familiar viewpoints, and test on test set images containing novel azimuths A_test = {60, 80, ..., 280}. Similarly, for the elevation viewpoints we train on E_train = {30, 35, 40}, and test on E_test = {45, 50, ..., 70}."
Hardware Specification | No | "We would like to gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research." (No specific GPU model is mentioned, only "GPUs".)
Software Dependencies | No | "Both are readily available in PyTorch and TensorFlow respectively [27, 28]." (Specific version numbers for PyTorch or TensorFlow are not provided.)
Experiment Setup | Yes | "A single 5×5 Conv layer with f_0 filters and stride 2 precedes four capsule layers... In all experiments, we use Adam [30] with default parameters and a batch size of 128 for training."
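The Algorithm 1 row above fixes the data layout the routing operates on: each capsule c = {a, M} pairs a scalar activation with a pose matrix, and part capsules cast votes for object capsules through learned transformation weights W. A minimal plain-Python sketch of that layout; the names `Capsule` and `vote` and the 2×2 pose size are illustrative, not taken from the paper's code:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Capsule:
    a: float              # activation (presence probability)
    M: List[List[float]]  # pose matrix (2x2 here for brevity)

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def vote(part: Capsule, W: List[List[float]]) -> List[List[float]]:
    # A part capsule i casts a vote for object capsule j by transforming
    # its pose with the learned weight matrix W_ij: V_ij = M_i @ W_ij.
    return matmul(part.M, W)
```

How the votes are then aggregated into updated object capsules, and how the latent connection variables z are inferred, is the subject of the algorithm itself and is not reproduced here.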
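The azimuth split quoted in the Dataset Splits row can be sanity-checked in a few lines. The azimuth sets follow the quote (smallNORB codes azimuths 0-340 in 20-degree steps); the `split` helper and the (azimuth, image) sample format are assumptions for illustration:

```python
# Familiar vs. novel viewpoint azimuths, as quoted from the paper.
train_azimuths = {300, 320, 340, 0, 20, 40}   # A_train: familiar viewpoints
test_azimuths = set(range(60, 281, 20))       # A_test: 60, 80, ..., 280

# The two sets partition all 18 smallNORB azimuths (0, 20, ..., 340).
all_azimuths = set(range(0, 341, 20))
assert train_azimuths | test_azimuths == all_azimuths
assert not (train_azimuths & test_azimuths)

def split(samples):
    """Partition (azimuth, image) pairs into familiar/novel viewpoints."""
    train = [s for s in samples if s[0] in train_azimuths]
    test = [s for s in samples if s[0] in test_azimuths]
    return train, test
```

The elevation split (E_train = {30, 35, 40}, E_test = {45, 50, ..., 70}) follows the same pattern with 5-degree steps.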
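The Experiment Setup row's stem layer implies a specific feature-map resolution on the 32×32 crops. A quick check with the standard convolution output-size formula; the padding values are assumptions, since the quote does not state the padding:

```python
def conv_out(size: int, kernel: int, stride: int, padding: int = 0) -> int:
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# 5x5 conv, stride 2, on a 32x32 crop, with no padding (an assumption):
stem_no_pad = conv_out(32, kernel=5, stride=2)          # 14x14 feature map
# With 'same'-style padding of 2, the map would instead be 16x16:
stem_padded = conv_out(32, kernel=5, stride=2, padding=2)
```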