Graph-based Isometry Invariant Representation Learning

Authors: Renata Khasanova, Pascal Frossard

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments: In this section we compare our network to the state-of-the-art transformation-invariant classification algorithms. 5.1. Experimental settings: We run experiments with different numbers of layers and parameters. ... 5.2. Performance evaluation: Here, we compare TIGraNet to state-of-the-art algorithms for transformation-invariant image classification tasks...
Researcher Affiliation | Academia | École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. Correspondence to: Renata Khasanova <renata.khasanova@epfl.ch>, Pascal Frossard <pascal.frossard@epfl.ch>.
Pseudocode | No | The paper describes the architecture and methods in prose and figures (Fig. 2, Fig. 3, Fig. 4) but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of the source code for the described methodology.
Open Datasets | Yes | MNIST-012. This is a small subset of the MNIST dataset (LeCun & Cortes, 2010). ... ETH-80. The dataset (Leibe & Schiele, 2003) contains images of 80 objects that belong to 8 classes.
Dataset Splits | Yes | MNIST-012. It includes 500 training, 100 validation and 100 test images... Both of these datasets [MNIST-rot/trans] contain 50k training, 3k validation and 9k test images. ...ETH-80... we randomly select 2300 and 300 of them as the training and validation sets, and use the rest for testing. (The split sizes are collected in the sketch below the table.)
Hardware Specification | Yes | We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
Software Dependencies | No | For each architecture, the network is trained using back-propagation with Adam (Kingma & Ba, 2014) optimization. This only names an optimization algorithm, not a specific software dependency with a version number. No other software versions are mentioned.
Experiment Setup | Yes | We run experiments with different numbers of layers and parameters. For each architecture, the network is trained using back-propagation with Adam (Kingma & Ba, 2014) optimization. The exact formulas of the partial derivatives and an explanation of the initialization of the network parameters are provided in the supplementary material. ...The details about the fully-connected layer parameters are given in Section 5. ...Table 1. Architectures used for the experiments... SC[K_l, M] is a spectral convolutional layer with K_l filters of degree M, DP[J_l] is a dynamic pooling layer that retains the J_l most important values, and S[K_max] is a statistical layer with K_max the maximum order of the Chebyshev polynomials. (An illustrative sketch of this layer stack is given below the table.)
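For reference, the split sizes quoted in the Dataset Splits row can be collected in a small dictionary. The layout and key names below are ours; only the numbers come from the paper, and the ETH-80 test size is simply whatever remains after the 2300/300 train/validation draw.

```python
# Split sizes as reported in the paper; the dictionary layout is illustrative only.
DATASET_SPLITS = {
    "MNIST-012":   {"train": 500,    "val": 100,   "test": 100},
    "MNIST-rot":   {"train": 50_000, "val": 3_000, "test": 9_000},
    "MNIST-trans": {"train": 50_000, "val": 3_000, "test": 9_000},
    "ETH-80":      {"train": 2_300,  "val": 300,   "test": "remainder"},
}
```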
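Since no reference implementation is released (see the Open Source Code row), the following is only a minimal PyTorch sketch of what the quoted architecture notation could correspond to: SC[K_l, M] as a bank of K_l Chebyshev-polynomial filters of degree M on a fixed graph Laplacian, DP[J_l] as a pooling step that keeps the J_l largest activations, and S[K_max] as a statistical layer that aggregates Chebyshev responses into order-independent statistics. All class, function, and parameter names are our assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): TIGraNet-style building blocks,
# derived only from the notation SC[K_l, M], DP[J_l], S[K_max] quoted above.
import torch
import torch.nn as nn


def chebyshev_basis(lap, x, order):
    """T_0(L)x, ..., T_order(L)x for a (normalized) graph Laplacian L."""
    terms = [x, lap @ x]
    for _ in range(2, order + 1):
        terms.append(2 * (lap @ terms[-1]) - terms[-2])
    return torch.stack(terms[: order + 1])               # (order+1, N, F)


class SpectralConv(nn.Module):
    """SC[K, M]: K learned polynomial filters of degree M on a fixed Laplacian."""

    def __init__(self, lap, in_feats, num_filters, degree):
        super().__init__()
        self.lap, self.degree = lap, degree
        self.theta = nn.Parameter(0.1 * torch.randn(degree + 1, in_feats, num_filters))

    def forward(self, x):                                 # x: (N, in_feats)
        basis = chebyshev_basis(self.lap, x, self.degree)  # (M+1, N, in_feats)
        return torch.einsum("mnf,mfk->nk", basis, self.theta)


def dynamic_pooling(x, keep):
    """DP[J]: retain the J largest-magnitude activations, zero out the rest."""
    flat = x.flatten()
    idx = flat.abs().topk(keep).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(x)


def statistical_layer(lap, x, k_max):
    """S[K_max]: node-wise mean/std of Chebyshev responses up to order K_max,
    which do not depend on node ordering (hence the invariance)."""
    basis = chebyshev_basis(lap, x, k_max)                # (K_max+1, N, K)
    return torch.cat([basis.mean(dim=1), basis.std(dim=1)], dim=-1).flatten()


# Training would follow the reported recipe (back-propagation with Adam), e.g.:
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

The sketch fixes the graph Laplacian once and learns only the polynomial coefficients, which matches the spirit of the quoted setup; layer sizes (K_l, J_l, K_max) would be taken from Table 1 of the paper for each architecture.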