Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation
Authors: Paul Szerlip, Gregory Morse, Justin Pugh, Kenneth Stanley
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper demonstrates the quality of DDFA's learned features on the MNIST dataset, where its performance confirms that DDFA is a viable technique for learning useful features. |
| Researcher Affiliation | Academia | Paul A. Szerlip, Gregory Morse, Justin K. Pugh, and Kenneth O. Stanley, Department of EECS (Computer Science Division), University of Central Florida, Orlando, FL 32816. {pszerlip,jpugh,kstanley}@eecs.ucf.edu, gregorymorse07@gmail.com |
| Pseudocode | No | The paper describes algorithms and processes in prose but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions that 'The HyperNEAT setup and parameters can be easily reproduced in full because they are simply the default parameters of the SharpNEAT 2.0 publicly-available package (Green 2003-2014).' This refers to a third-party tool used, not the authors' own implementation code for DDFA. |
| Open Datasets | Yes | To investigate this question DDFA is trained and tested on the MNIST handwritten digit recognition dataset (LeCun and Cortes 1998), which consists of 60,000 training images and 10,000 test images. |
| Dataset Splits | Yes | The training and validation procedure mirrors that followed by Hinton, Osindero, and Teh (2006): first training is run on 50,000 examples for 50 epochs to find the network that performs best on a 10,000-example validation set. |
| Hardware Specification | No | The paper states 'Collecting 3,000 features took about 36 hours of computation on 12 3.0 GHz cores.' This mentions the number and speed of cores but does not specify the CPU model, GPU, or other specific hardware components. |
| Software Dependencies | Yes | The HyperNEAT setup and parameters can be easily reproduced in full because they are simply the default parameters of the SharpNEAT 2.0 publicly-available package (Green 2003-2014). |
| Experiment Setup | Yes | During the course of evolution, features are selected for reproduction based on their signature's novelty score (sparseness ρ), calculated as the sum of the distances to the k-nearest neighbors (k = 20)... each individual in the population (size = 100) has a 1% chance of being added to the novelty archive... Those individuals that score above a threshold ρ_min = 2,000 are added to the feature list... first training is run on 50,000 examples for 50 epochs to find the network that performs best on a 10,000-example validation set. |
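
The novelty-search mechanics quoted in the Experiment Setup row (sparseness ρ as the sum of distances to the k = 20 nearest neighbors, a threshold ρ_min = 2,000 for joining the feature list, and a 1% chance per individual of entering the archive) can be sketched as below. This is a minimal illustrative sketch, not the authors' code: Euclidean distance between signature vectors and the function names `sparseness` and `select_features` are assumptions for illustration.

```python
import numpy as np

def sparseness(signature, others, k=20):
    # Novelty score rho: sum of distances to the k nearest neighbors
    # among the other signatures (current population plus archive).
    # Euclidean distance over signature vectors is an assumption here.
    dists = np.sort(np.linalg.norm(np.asarray(others) - signature, axis=1))
    return float(dists[:k].sum())

def select_features(population, archive, rho_min=2000.0, p_archive=0.01, rng=None):
    # Individuals whose sparseness exceeds rho_min join the feature list;
    # each individual independently has a 1% chance of entering the archive.
    rng = rng if rng is not None else np.random.default_rng(0)
    feature_list = []
    for i, ind in enumerate(population):
        others = [p for j, p in enumerate(population) if j != i] + list(archive)
        if sparseness(ind, others, k=min(20, len(others))) > rho_min:
            feature_list.append(ind)
        if rng.random() < p_archive:
            archive.append(ind)
    return feature_list, archive
```

With the paper's population size of 100, each generation would call `select_features` once, accumulating accepted features until the target count (e.g. the 3,000 features mentioned in the Hardware row) is reached.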