Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Organizing recurrent network dynamics by task-computation to enable continual learning
Authors: Lea Duncker, Laura Driscoll, Krishna V. Shenoy, Maneesh Sahani, David Sussillo
NeurIPS 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Employing a set of tasks used in neuroscience, we demonstrate that our approach successfully eliminates catastrophic interference and offers a substantial improvement over previous continual learning algorithms. |
| Researcher Affiliation | Collaboration | Lea Duncker (Gatsby Unit, UCL, London, UK); Laura N. Driscoll (Stanford University, Stanford, CA); Krishna V. Shenoy (Stanford University, Stanford, CA); Maneesh Sahani (Gatsby Unit, UCL, London, UK); David Sussillo (Google Brain, Google Inc., Mountain View, CA) |
| Pseudocode | No | The paper describes the proposed algorithm using mathematical equations (3), (4), and (5), but does not include an explicitly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology. |
| Open Datasets | Yes | We demonstrate our continual learning approach on a set of tasks previously used for studying multi-task representations in RNNs [9]. |
| Dataset Splits | No | The paper refers to 'test trials' and 'test error' but does not explicitly provide details about training, validation, and test dataset splits, such as percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used for the implementation. |
| Experiment Setup | Yes | Networks with rectified-linear activation functions were trained on these tasks to minimize the squared error between readouts and target outputs under added L2-norm regularization of network weights and activity. |
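
For context on the "Experiment Setup" row, a minimal sketch of the stated training objective follows, assuming a vanilla PyTorch RNN. The layer sizes, regularization weights (`lam_w`, `lam_r`), and optimizer choice are illustrative assumptions, not values reported in the paper, and the paper's continual learning update rules (equations (3), (4), and (5)) are not reproduced here.

```python
# Sketch of the quoted objective: squared error between readouts and
# targets, plus L2 penalties on network weights and hidden activity.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

n_inputs, n_hidden, n_outputs = 4, 256, 2   # assumed dimensions
rnn = nn.RNN(n_inputs, n_hidden, nonlinearity="relu", batch_first=True)
readout = nn.Linear(n_hidden, n_outputs)
params = list(rnn.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-3)     # optimizer is an assumption

lam_w, lam_r = 1e-4, 1e-4                   # assumed L2 penalty weights

def training_step(inputs, targets):
    """One gradient step on a batch of trials.

    inputs:  (batch, time, n_inputs) tensor of task inputs
    targets: (batch, time, n_outputs) tensor of target outputs
    """
    hidden, _ = rnn(inputs)                  # (batch, time, n_hidden)
    outputs = readout(hidden)
    loss = ((outputs - targets) ** 2).mean()            # squared error
    loss = loss + lam_r * (hidden ** 2).mean()          # activity L2
    loss = loss + lam_w * sum((p ** 2).sum() for p in params)  # weight L2
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Under these assumptions, `training_step` would be called once per batch of trials from the current task; how updates are constrained across tasks is the subject of the paper's algorithm and is not sketched here.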