The least-control principle for local learning at equilibrium
Authors: Alexander Meulemans, Nicolas Zucchet, Seijin Kobayashi, Johannes von Oswald, João Sacramento
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In practice, our principle leads to strong performance matching that of leading gradient-based learning methods when applied to an array of problems involving recurrent neural networks and meta-learning. |
| Researcher Affiliation | Academia | ¹Department of Computer Science, ETH Zürich; ²Institute of Neuroinformatics, University of Zürich and ETH Zürich. {ameulema, nzucchet, seijink, voswaldj, rjoao}@ethz.ch |
| Pseudocode | No | The paper describes processes and dynamics through mathematical equations but does not include any clearly labeled Pseudocode or Algorithm blocks. |
| Open Source Code | Yes | Code for all experiments is available at https://github.com/seijin-kobayashi/least-control |
| Open Datasets | Yes | Experiments use the MNIST digit classification task [52] and the CIFAR-10 image classification benchmark [60]. |
| Dataset Splits | No | The main body of the paper uses the MNIST and CIFAR-10 datasets but does not state training, validation, or test split percentages, sample counts, or citations for predefined splits. Such details may be in the supplementary material (Section S6), which was not available for analysis. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8) required to replicate the experiments. |
| Experiment Setup | No | The paper describes the experimental setup at a high level (types of controllers and learning rules) but the main text does not give hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings needed for full reproducibility. Further details are deferred to Section S6 of the supplementary material, which was not available for analysis. |
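Since the Dataset Splits row notes that the paper does not state split sizes, a reader checking reproducibility can fall back on the standard predefined splits of the two benchmarks themselves. The sketch below records those canonical counts (properties of MNIST and CIFAR-10, not splits reported by the authors) and derives the corresponding fractions:

```python
# Canonical predefined train/test splits of the benchmarks cited in the paper.
# These are properties of the datasets themselves, not values from the paper.
STANDARD_SPLITS = {
    "MNIST": {"train": 60_000, "test": 10_000},
    "CIFAR-10": {"train": 50_000, "test": 10_000},
}

def split_fractions(name: str) -> dict:
    """Return each split's share of the dataset's total example count."""
    counts = STANDARD_SPLITS[name]
    total = sum(counts.values())
    return {split: n / total for split, n in counts.items()}

for name in STANDARD_SPLITS:
    print(name, split_fractions(name))
```

If the released code follows common practice, a validation set would be carved out of the training split; absent explicit numbers in the paper, any such fraction is an assumption to be verified against the repository.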