Local plasticity rules can learn deep representations using self-supervised contrastive predictions
Authors: Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "4 Empirical results" (section heading) |
| Researcher Affiliation | Academia | Bernd Illing, Jean Ventura, Guillaume Bellec, Wulfram Gerstner, {firstname.lastname}@epfl.ch, Department of Computer Science & Department of Life Sciences, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland |
| Pseudocode | No | The paper describes the CLAPP algorithm but does not provide it in a formal pseudocode block or a clearly labeled algorithm section. |
| Open Source Code | Yes | "Our code is available at https://github.com/EPFL-LCN/pub-illing2021-neurips" |
| Open Datasets | Yes | We first consider the STL-10 image dataset [Coates et al., 2011]. |
| Dataset Splits | No | The paper mentions training on the unlabelled part of the STL-10 dataset and evaluating on the test set, but it does not explicitly provide the specific percentages or counts for training, validation, or test splits for the main CLAPP training or the linear classifier training. |
| Hardware Specification | Yes | We use 4 GPUs (NVIDIA Tesla V100-SXM2 32 GB) for data-parallel training, resulting in a simulation time of around 4 days per run. |
| Software Dependencies | No | The paper mentions "Pytorch" in a citation, but it does not specify the version numbers for PyTorch or any other software dependencies used in their experiments. |
| Experiment Setup | Yes | Other hyper-parameters and data augmentation are taken from Löwe et al. [2019]; see Appendix B. We then train a 6-layer VGG-like [Simonyan and Zisserman, 2015] encoder (VGG-6) using the CLAPP rule (Equations 6–8). Training is performed on the unlabelled part of the STL-10 dataset for 300 epochs. |
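Since the paper provides no pseudocode block, the core idea behind the CLAPP objective can be illustrated with a toy sketch: each layer scores how well a (fixed-weight) prediction of its current activity matches a future activity vector, and a hinge loss separates positive (true future) from negative (shuffled) samples. This is a minimal, hypothetical NumPy sketch, not the authors' implementation; the variable names (`z_t`, `z_future`, `W_pred`) and the toy dimensionality are illustrative assumptions, and the real CLAPP rule derives local weight updates from this loss.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy layer width; illustrative only

# Hypothetical layer activations: current time step, true future,
# and a "negative" sample (e.g. drawn from a different image).
z_t = rng.standard_normal(dim)
z_future = rng.standard_normal(dim)
z_negative = rng.standard_normal(dim)

# Prediction weights mapping current activity to a predicted future.
W_pred = 0.1 * rng.standard_normal((dim, dim))

def clapp_hinge_loss(z_context, z_target, y, W):
    """Layer-local hinge loss on a bilinear predictive score.

    y = +1 for a positive (true future) sample, -1 for a negative one.
    The loss is zero once the score is on the correct side of the margin.
    """
    score = z_target @ W @ z_context  # scalar predictive score
    return max(0.0, 1.0 - y * score)

loss_pos = clapp_hinge_loss(z_t, z_future, +1, W_pred)
loss_neg = clapp_hinge_loss(z_t, z_negative, -1, W_pred)
```

In the full method this loss is applied independently at every layer of the VGG-6 encoder, which is what makes the resulting plasticity rule local rather than end-to-end backpropagated.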