Unsupervised Progressive Learning and the STAM Architecture
Authors: James Smith, Cameron Taylor, Seth Baer, Constantine Dovrolis
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate STAM representations using clustering and classification tasks. We include results on four datasets: MNIST [LeCun et al., 1998], EMNIST (balanced split with 47 classes) [Cohen et al., 2017], SVHN [Netzer et al., 2011], and CIFAR-10 [Krizhevsky et al., 2014]. |
| Researcher Affiliation | Academia | Georgia Institute of Technology ({jamessealesmith, cameron.taylor, cooperbaer.seth, constantine}@gatech.edu) |
| Pseudocode | No | The paper describes the architecture components but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/CameronTaylorFL/stam |
| Open Datasets | Yes | We include results on four datasets: MNIST [LeCun et al., 1998], EMNIST (balanced split with 47 classes) [Cohen et al., 2017], SVHN [Netzer et al., 2011], and CIFAR-10 [Krizhevsky et al., 2014] |
| Dataset Splits | No | The paper only states "For each dataset we utilize the standard training and test splits." and does not report explicit split sizes, percentages, or any validation split. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions). |
| Experiment Setup | No | Hyperparameter values are deferred to the supplementary material ("The hyperparameter values are tabulated in SM-A.") rather than specified in the main text. |
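Since the paper relies on the standard train/test splits of four common vision datasets, those splits can be reproduced directly from a dataset library. The sketch below is a hypothetical helper (not taken from the authors' repository, which should be consulted for the actual data pipeline) assuming torchvision is installed; note that EMNIST takes a `split="balanced"` argument for the 47-class balanced split quoted above, and SVHN uses `split="train"`/`split="test"` instead of a `train=` flag.

```python
def standard_splits(root="./data", download=True):
    """Return {name: (train_set, test_set)} for the four datasets used
    in the STAM paper, using each dataset's standard train/test split.

    Hypothetical helper for illustration; `root` and `download` are
    ordinary torchvision dataset arguments, not values from the paper.
    """
    # Imported lazily so merely defining this helper does not require
    # torchvision to be installed.
    from torchvision import datasets

    return {
        "MNIST": (datasets.MNIST(root, train=True, download=download),
                  datasets.MNIST(root, train=False, download=download)),
        # Balanced split: 47 classes, as stated in the paper.
        "EMNIST": (datasets.EMNIST(root, split="balanced", train=True,
                                   download=download),
                   datasets.EMNIST(root, split="balanced", train=False,
                                   download=download)),
        # SVHN selects its split by name rather than a train= flag.
        "SVHN": (datasets.SVHN(root, split="train", download=download),
                 datasets.SVHN(root, split="test", download=download)),
        "CIFAR-10": (datasets.CIFAR10(root, train=True, download=download),
                     datasets.CIFAR10(root, train=False, download=download)),
    }
```

Calling `standard_splits()` downloads each dataset on first use and returns the same train/test partitions the paper refers to as the "standard" splits.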