Learning the Irreducible Representations of Commutative Lie Groups

Authors: Taco Cohen, Max Welling

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We train the model on pairs of transformed image patches, and show that the learned invariant representation is highly effective for classification. We trained a TSA model with 100 filters on a stream of 250,000 16×16 image patches x(t), y(t). We tested the utility of the model for invariant classification on a rotated version of the MNIST dataset, using a 1-Nearest Neighbor classifier. The results in fig. 4 show that TD outperforms ED, but is outperformed by κ̂ and MD by a large margin.
Researcher Affiliation | Academia | Machine Learning Group, University of Amsterdam
Pseudocode | No | The paper describes mathematical formulations and derivations but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to code repositories for the described methodology.
Open Datasets | Yes | We tested the utility of the model for invariant classification on a rotated version of the MNIST dataset.
Dataset Splits | No | The paper mentions "60k training examples and 10k testing examples" but does not specify a separate validation set or detailed split percentages that would allow for full reproduction of data partitioning.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers required to replicate the experiment.
Experiment Setup | Yes | We trained a TSA model with 100 filters on a stream of 250,000 16×16 image patches x(t), y(t). The learning rate α was initialized at α0 = 0.25 and decayed as α = α0/√T, where T was incremented by one with each pass through the data. Each minibatch consisted of 100 data pairs. We tested the utility of the model for invariant classification on a rotated version of the MNIST dataset, using a 1-Nearest Neighbor classifier. Each digit was rotated by a random angle and rescaled to 16×16 pixels.
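
The experiment-setup row above gives enough detail to sketch the optimization schedule in code. The following is a minimal Python sketch, not the authors' implementation: update_filters is a hypothetical placeholder for the TSA gradient step (which the paper states only mathematically), and the patch pairs are random stand-ins for the real 250,000-pair stream.

```python
import numpy as np

def update_filters(W, x_batch, y_batch, lr):
    """Hypothetical placeholder for the paper's TSA gradient step."""
    return W  # a real implementation would update the filters W here

rng = np.random.default_rng(0)
dim, n_filters = 16 * 16, 100   # 16x16 patches, 100 filters (as in the paper)
n_pairs = 25_000                # the paper streams 250,000 pairs; fewer here
batch_size = 100                # 100 data pairs per minibatch
alpha0 = 0.25                   # initial learning rate

# Random stand-ins for the stream of transformed patch pairs x(t), y(t).
X = rng.standard_normal((n_pairs, dim), dtype=np.float32)
Y = rng.standard_normal((n_pairs, dim), dtype=np.float32)
W = rng.standard_normal((dim, n_filters), dtype=np.float32)

T = 0  # incremented by one with each pass through the data
for epoch in range(5):
    T += 1
    alpha = alpha0 / np.sqrt(T)  # the decay schedule quoted above
    for i in range(0, n_pairs, batch_size):
        W = update_filters(W, X[i:i + batch_size], Y[i:i + batch_size], alpha)
```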
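The evaluation protocol (rotate each digit by a random angle, rescale to 16×16, classify with 1-Nearest Neighbor) can be sketched in the same spirit. This illustration substitutes random arrays for MNIST and raw pixels for the learned TSA features, so the printed accuracy is meaningless; only the preprocessing and 1-NN mechanics follow the quoted setup.

```python
import numpy as np
from scipy.ndimage import rotate, zoom
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

def rotate_and_rescale(img, angle):
    """Rotate a 28x28 digit by `angle` degrees, then rescale to 16x16."""
    r = rotate(img, angle, reshape=False, order=1)
    return zoom(r, 16 / 28, order=1)

def preprocess(imgs):
    """Apply a random rotation to each digit and flatten to a vector."""
    angles = rng.uniform(0.0, 360.0, len(imgs))
    return np.stack([rotate_and_rescale(im, a).ravel()
                     for im, a in zip(imgs, angles)])

# Random stand-ins; a real run would load the 60k/10k MNIST split here.
train_imgs = rng.random((600, 28, 28))
train_lbls = rng.integers(0, 10, 600)
test_imgs = rng.random((100, 28, 28))
test_lbls = rng.integers(0, 10, 100)

X_train, X_test = preprocess(train_imgs), preprocess(test_imgs)

# 1-Nearest Neighbor: each test digit takes the label of its closest
# training digit in feature space (raw pixels in this sketch).
nearest = cdist(X_test, X_train).argmin(axis=1)
accuracy = (train_lbls[nearest] == test_lbls).mean()
print(f"1-NN accuracy: {accuracy:.3f}")
```

A 1-NN classifier has no trainable parameters of its own, which is presumably why the paper uses it: differences in accuracy then reflect the quality of the representation rather than of the classifier.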