Invariant-Equivariant Representation Learning for Multi-Class Data
Author: Ilya Feige
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate qualitatively compelling representation learning and competitive quantitative performance, in both supervised and semi-supervised settings, versus comparable modelling approaches in the literature with little fine tuning. We carry out experiments on both the MNIST data set (LeCun et al., 1998) and the Street View House Numbers (SVHN) data set (Netzer et al., 2011). |
| Researcher Affiliation | Industry | Ilya Feige, Faculty, 54 Welbeck Street, London. Correspondence to: Ilya Feige <ilya@faculty.ai>. |
| Pseudocode | No | The paper describes the model and inference procedures using text and mathematical equations, and includes architectural diagrams, but it does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about releasing open-source code or provide links to a code repository for the methodology described. |
| Open Datasets | Yes | We carry out experiments on both the MNIST data set (LeCun et al., 1998) and the Street View House Numbers (SVHN) data set (Netzer et al., 2011). |
| Dataset Splits | Yes | All visualisations in this work, including those in Figure 3, use data from the validation set (5,000 images for MNIST; 3,257 for SVHN). |
| Hardware Specification | No | The paper mentions that 'This work was developed and the experiments were run on the Faculty Platform for machine learning.' but does not provide any specific hardware details such as GPU/CPU models, memory, or processor speeds. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | The numbers of training epochs are chosen to be 20, 25, 30, and 35, for data set sizes 100, 600, 1000, and 3000, respectively, on MNIST, and 20 and 30 epochs for data set sizes 1000 and 3000, respectively, on SVHN. We use an 8D latent space and m_max = 4 for MNIST, and a 16D latent space and m_max = 10 for SVHN. |
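
The reported hyperparameters can be collected into a single lookup table. Below is a minimal sketch, assuming only the values quoted in the Experiment Setup and Dataset Splits rows above; the names `EXPERIMENT_CONFIG` and `get_config` are illustrative and do not come from the paper.

```python
# Hypothetical summary of the training schedule quoted from the paper.
# Values (epochs, latent dims, m_max, validation sizes) are taken from the
# report above; the structure and names are illustrative assumptions.

EXPERIMENT_CONFIG = {
    "mnist": {
        "latent_dim": 8,        # 8D latent space
        "m_max": 4,             # m_max = 4 for MNIST
        "epochs_by_train_size": {100: 20, 600: 25, 1000: 30, 3000: 35},
        "validation_size": 5000,
    },
    "svhn": {
        "latent_dim": 16,       # 16D latent space
        "m_max": 10,            # m_max = 10 for SVHN
        "epochs_by_train_size": {1000: 20, 3000: 30},
        "validation_size": 3257,
    },
}


def get_config(dataset: str, train_size: int) -> dict:
    """Look up the reported hyperparameters for a dataset and training-set size."""
    cfg = EXPERIMENT_CONFIG[dataset]
    return {
        "latent_dim": cfg["latent_dim"],
        "m_max": cfg["m_max"],
        "epochs": cfg["epochs_by_train_size"][train_size],
    }


if __name__ == "__main__":
    # e.g. MNIST with 600 labelled examples -> 25 epochs, 8D latent space, m_max = 4
    print(get_config("mnist", 600))
```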