Non-Parametric Transformation Networks for Learning General Invariances from Data

Authors: Dipan K. Pal, Marios Savvides (pp. 4667-4674)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the efficacy of NPTNs on data such as MNIST with extreme transformations and CIFAR10 where they outperform baselines, and further outperform several recent algorithms on ETH-80. They do so while having the same number of parameters.
Researcher Affiliation | Academia | Dipan K. Pal, Marios Savvides, Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, {dipanp,marioss}@andrew.cmu.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions a 'third party implementation in PyTorch' for Capsule Networks and provides a link to it (https://github.com/dragen1860/CapsNet-Pytorch.git). However, it does not state that the authors' own NPTN code is open-source or provide a link for their implementation.
Open Datasets | Yes | We demonstrate the efficacy of NPTNs on data such as MNIST with extreme transformations and CIFAR10 where they outperform baselines, and further outperform several recent algorithms on ETH-80. ... We utilize the CIFAR10 dataset. ... We now benchmark NPTNs against other approaches learning invariances on the ETH-80 dataset (Leibe and Schiele 2003). ... For this experiment, we augment MNIST with extreme a) random rotations b) random translations, both in training and testing data... A minimal augmentation sketch with assumed ranges appears after this table.
Dataset Splits | No | The paper specifies training epochs and learning rate for CIFAR10 and MNIST, and training/testing images for ETH-80, but does not explicitly describe a validation dataset split (e.g., percentages or counts for a separate validation set).
Hardware Specification | No | The paper mentions 'Given our limited computation resources at this time, the size and depth of the networks that we can train is subsequently limited,' but does not specify any exact hardware models (e.g., GPU, CPU, or memory).
Software Dependencies | No | The paper mentions using 'standard off-the-shelf deep learning frameworks and libraries' and a 'third party implementation in PyTorch' but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | Training was for 300 epochs with the learning rate being 0.1 and decreased at epochs 150 and 225 by a factor of 10. ... The networks were trained on the 2-pixel shifted MNIST for 50 epochs with a learning rate of 10^-3. All other hyperparameters were preserved. A scheduler sketch with assumed values also follows this table.
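
A minimal sketch (not from the paper) of how the "extreme transformation" MNIST setup quoted in the Open Datasets row could be reproduced with torchvision. The rotation and translation ranges are not stated in this section, so the values below are placeholder assumptions.

```python
import torch
from torchvision import datasets, transforms

# Placeholder "extreme" augmentation: the ranges are assumptions, not the paper's values.
extreme_transform = transforms.Compose([
    transforms.RandomRotation(degrees=90),                      # assumed rotation range
    transforms.RandomAffine(degrees=0, translate=(0.2, 0.2)),   # assumed translation range
    transforms.ToTensor(),
])

# Per the quoted text, both training and testing data are augmented.
train_set = datasets.MNIST("./data", train=True, download=True, transform=extreme_transform)
test_set = datasets.MNIST("./data", train=False, download=True, transform=extreme_transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64, shuffle=False)
```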
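
The Experiment Setup row fully specifies the CIFAR10 learning-rate schedule (0.1, decreased by a factor of 10 at epochs 150 and 225 over 300 epochs); the sketch below expresses that schedule with PyTorch's MultiStepLR. The model, momentum, and training-loop details are assumptions, since the section does not give them.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder model, not an NPTN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # momentum is an assumption
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 225], gamma=0.1)

for epoch in range(300):
    # ... one pass over the CIFAR10 training loader would go here ...
    scheduler.step()  # lr: 0.1 until epoch 150, 0.01 until epoch 225, then 0.001
```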