Reservoir Computing meets Recurrent Kernels and Structured Transforms

Authors: Jonathan Dong, Ruben Ohana, Mushegh Rafayelyan, Florent Krzakala

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | First, we rigorously prove the convergence of Reservoir Computing towards Recurrent Kernels under standard assumptions and derive convergence rates in O(1/√N), with N being the number of neurons. We then numerically show that convergence is achieved in a large variety of cases and fails in practice only when the activation function is unbounded (for instance with ReLU). These techniques are tested on chaotic time series prediction, and they all present comparable results in the large-dimensional setting. We also derive the computational complexities of each algorithm and detail how Recurrent Kernels can be implemented efficiently. In the end, the two acceleration techniques we propose are faster than Reservoir Computing and can tackle equally complex tasks. (A numerical sketch of this convergence follows the table.)
Researcher Affiliation | Collaboration | Jonathan Dong (1,2), Ruben Ohana (1,3), Mushegh Rafayelyan (2), Florent Krzakala (1,3,4). (1) Laboratoire de Physique de l'École Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France; (2) Laboratoire Kastler Brossel, École Normale Supérieure, Université PSL, CNRS, Sorbonne Université, Collège de France, F-75005 Paris, France; (3) LightOn, F-75002 Paris, France; (4) IdePHICS lab, École Polytechnique Fédérale de Lausanne, Switzerland
Pseudocode | Yes | Algorithm 1: Recurrent Kernel algorithm (a re-derived sketch of the Gram computation follows the table).
Open Source Code | Yes | A public repository is available at https://github.com/rubenohana/Reservoir-computing-kernels.
Open Datasets | Yes | The Kuramoto-Sivashinsky (KS) chaotic system is defined by a fourth-order partial differential equation in space and time [48, 49]. We use a discretized version from a publicly available code [47] with input dimension d = 100. [47] Pantelis R. Vlachas, Jaideep Pathak, Brian R. Hunt, Themistoklis P. Sapsis, Michelle Girvan, Edward Ott, and Petros Koumoutsakos. Forecasting of spatio-temporal chaotic dynamics with recurrent neural networks: A comparative study of reservoir computing and backpropagation algorithms. arXiv preprint arXiv:1910.05266, 2019. https://github.com/pvlachas/RNN-RC-Chaos/. (The KS equation is stated below the table.)
Dataset Splits | No | The paper mentions training and testing sets, but does not explicitly describe a separate validation split (e.g., specific percentages or counts) or a formal cross-validation setup for hyperparameter tuning. It only states that "the hyperparameters are found with a grid search," which implies a validation procedure without specifying the split.
Hardware Specification | Yes | Experiments were run on an NVIDIA V100 16GB.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python or the numerical libraries used) that would be needed to replicate the experimental environment. It mentions using "high-performance libraries in [40]" but does not specify their versions.
Experiment Setup | No | The paper states that "the hyperparameters are found with a grid search, and the same set is used for RC, SRC, and RK to demonstrate their equivalence," and that Ridge Regression with regularization parameter α is used to learn W_o. However, it does not report the chosen hyperparameter values (e.g., the value of α or the grid searched over), which are needed to reproduce the experimental setup (a readout sketch follows the table).
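
To make the quoted convergence claim concrete, here is a minimal numerical sketch, not the authors' code: two input series are driven through the same random reservoir with the erf activation, and the inner product of the resulting states is compared against the deterministic recurrent-kernel recursion (the arcsine kernel of erf random features). The scales sigma_r and sigma_i and all sizes are assumed placeholders.

```python
# Minimal sketch (assumed parameters, not the authors' code): the inner
# product <x_T, y_T> of two reservoir states concentrates around the
# recurrent-kernel value k_T as N grows, at a rate of roughly 1/sqrt(N).
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
d, T = 10, 20                               # input dimension, time steps
sigma_r, sigma_i = 0.4, 0.4                 # assumed reservoir/input scales
I = rng.normal(size=(T, d)) / np.sqrt(d)    # two arbitrary input series
J = rng.normal(size=(T, d)) / np.sqrt(d)

def final_states(N):
    """x_{t+1} = erf(W_r x_t + W_i i_t) / sqrt(N), same weights for both series."""
    W_r = rng.normal(0.0, sigma_r, (N, N))
    W_i = rng.normal(0.0, sigma_i, (N, d))
    x = y = np.zeros(N)
    for t in range(T):
        x = erf(W_r @ x + W_i @ I[t]) / np.sqrt(N)
        y = erf(W_r @ y + W_i @ J[t]) / np.sqrt(N)
    return x, y

def arcsine(uv, uu, vv):
    """Limiting kernel of erf random features (Williams, 1998)."""
    return (2 / np.pi) * np.arcsin(2 * uv / np.sqrt((1 + 2 * uu) * (1 + 2 * vv)))

# Deterministic recursion: k_xy tracks <x_t, y_t>, k_xx / k_yy the squared norms.
k_xy = k_xx = k_yy = 0.0
for t in range(T):
    uv = sigma_r**2 * k_xy + sigma_i**2 * (I[t] @ J[t])
    uu = sigma_r**2 * k_xx + sigma_i**2 * (I[t] @ I[t])
    vv = sigma_r**2 * k_yy + sigma_i**2 * (J[t] @ J[t])
    k_xy, k_xx, k_yy = arcsine(uv, uu, vv), arcsine(uu, uu, uu), arcsine(vv, vv, vv)

for N in (100, 1000, 4000):                 # kept small: W_r is an N x N matrix
    x, y = final_states(N)
    print(N, x @ y, "vs", k_xy)             # gap shrinks as N grows
```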
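
The Recurrent Kernel algorithm itself never instantiates a reservoir: it iterates the same kernel recursion over the whole Gram matrix of input sequences. The function below is a re-derived sketch under the same erf/arcsine assumption as above, not the paper's Algorithm 1 verbatim.

```python
# Re-derived sketch of a recurrent-kernel Gram computation (erf/arcsine
# assumption as above; not the authors' Algorithm 1 verbatim).
import numpy as np

def recurrent_kernel_gram(X, sigma_r=0.4, sigma_i=0.4):
    """X: array of shape (n, T, d) holding n input time series."""
    n, T, d = X.shape
    K = np.zeros((n, n))                    # K[a, b] tracks <x_t^(a), x_t^(b)>
    for t in range(T):
        G = X[:, t, :] @ X[:, t, :].T       # input inner products at time t
        M = sigma_r**2 * K + sigma_i**2 * G
        s = 1 + 2 * np.diag(M)
        K = (2 / np.pi) * np.arcsin(2 * M / np.sqrt(np.outer(s, s)))
    return K

rng = np.random.default_rng(1)
K = recurrent_kernel_gram(rng.normal(size=(5, 20, 10)) / np.sqrt(10))
```

Training then replaces the ridge regression on reservoir states with its kernel form, e.g. solving (K + αI)β = Y, which is what makes the approach fast when N would otherwise be large.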
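
For context, the fourth-order PDE behind the dataset is the Kuramoto-Sivashinsky equation; the display below is its standard one-dimensional form (the discretized code in [47] may use a rescaled variant):

```latex
\partial_t u = -u\,\partial_x u - \partial_x^2 u - \partial_x^4 u,
\qquad u = u(x,t), \quad x \in [0, L) \text{ with periodic boundary conditions}.
```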
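
Finally, since the unreported piece of the setup is the ridge readout and its grid search, here is a hedged sketch of what that procedure typically looks like; the state and target arrays, the split point, and the α grid are all placeholders, as the paper does not report them.

```python
# Hedged sketch of a ridge-regression readout with a grid search over alpha;
# all data, splits, and grid values below are placeholders, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 200))    # placeholder reservoir states (T x N)
targets = rng.normal(size=(500, 100))   # placeholder next-step targets (T x d)

def fit_readout(X, Y, alpha):
    """Closed-form ridge regression: W_o = (X^T X + alpha I)^{-1} X^T Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

split = 400                             # held-out tail used for validation
errs = []
for alpha in (1e-6, 1e-4, 1e-2, 1.0):
    W_o = fit_readout(states[:split], targets[:split], alpha)
    errs.append((np.linalg.norm(states[split:] @ W_o - targets[split:]), alpha))
print("best alpha:", min(errs)[1])
```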