Continual Unsupervised Representation Learning
Authors: Dushyant Rao, Francesco Visin, Andrei Rusu, Razvan Pascanu, Yee Whye Teh, Raia Hadsell
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 experiments. Supporting quotes: "We demonstrate the efficacy of CURL in an unsupervised learning setting with MNIST and Omniglot"; "We evaluate this using cluster accuracy" (see the cluster-accuracy sketch after this table); "We perform an ablation study to gauge the impact of the expansion threshold for continual learning, in terms of cluster accuracy and number of components used, as shown in Figure 3"; "The results in Table 3 demonstrate that the proposed unsupervised approach can easily and effectively be adapted to supervised tasks, achieving competitive results for both scenarios." |
| Researcher Affiliation | Industry | Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu, Raia Hadsell (DeepMind, London, UK) |
| Pseudocode | No | The paper contains mathematical equations and diagrams, but no structured pseudocode or algorithm blocks are present. |
| Open Source Code | Yes | Code for all experiments can be found at https://github.com/deepmind/deepmind-research/. |
| Open Datasets | Yes | "For the evaluation we extensively utilise the MNIST (LeCun et al., 2010) and Omniglot (Lake et al., 2011) datasets", with further information in Appendix B. (A minimal dataset-loading sketch appears after this table.) |
| Dataset Splits | No | The paper mentions 'training' and 'validation' in the context of model components and processes (e.g., 'during training', 'model validation'), but the main text does not give specific numerical details (percentages or counts) for the training, validation, or test splits; these details are deferred to Appendix C. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models, or cloud computing instance types. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., specific programming language versions or library versions). |
| Experiment Setup | No | The paper explicitly states that 'further experimental details can be found in Appendix C.1' and 'full details of the experimental setup can be found in Appendix C.3', indicating that specific experimental setup details, such as hyperparameters or training configurations, are not provided in the main text. |
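
The table above notes that the paper's quantitative results are reported as "cluster accuracy". The snippet below is a minimal sketch of how that metric is conventionally computed: predicted cluster identities are matched to ground-truth labels with the Hungarian algorithm, and accuracy is measured under the best matching. This is a generic illustration, not the authors' released implementation; the function name `cluster_accuracy` is ours.

```python
# Sketch of best-match ("cluster") accuracy, assuming cluster ids and labels
# are integers in [0, K). Not taken from the authors' code.
import numpy as np
from scipy.optimize import linear_sum_assignment


def cluster_accuracy(y_true, y_pred):
    """Accuracy after optimally matching predicted clusters to true labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    # Contingency matrix: counts of (predicted cluster, true label) pairs.
    contingency = np.zeros((n, n), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        contingency[p, t] += 1
    # Hungarian matching maximises the total count on the matched entries.
    row_ind, col_ind = linear_sum_assignment(-contingency)
    return contingency[row_ind, col_ind].sum() / y_true.size


if __name__ == "__main__":
    # Toy example: predictions are a relabelling of the true classes.
    y_true = [0, 0, 1, 1, 2, 2]
    y_pred = [2, 2, 0, 0, 1, 1]
    print(cluster_accuracy(y_true, y_pred))  # -> 1.0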
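The "Open Datasets" row above notes that both benchmarks are publicly available. As a convenience, the sketch below loads them through TensorFlow Datasets; this is only an illustration of their availability, and the authors' released code at https://github.com/deepmind/deepmind-research/ defines its own input pipeline, which may differ.

```python
# Hedged sketch: fetching the public MNIST and Omniglot datasets via TFDS.
import tensorflow_datasets as tfds

# MNIST: 28x28 grayscale handwritten digits, 10 classes.
mnist_train = tfds.load("mnist", split="train")

# Omniglot: handwritten characters drawn from many alphabets.
omniglot_train = tfds.load("omniglot", split="train")

# Inspect one MNIST example (a dict with "image" and "label" features).
for example in mnist_train.take(1):
    print(example["image"].shape, int(example["label"]))
```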