Framing RNN as a kernel method: A neural ODE approach

Authors: Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quotes 'Our results are illustrated on simulated datasets.' and the title of Section 4, 'Numerical illustrations.'
Researcher Affiliation | Collaboration | 1. Sorbonne Université, CNRS, Laboratoire de Probabilités, Statistique et Modélisation (LPSM), F-75005 Paris, France ({adeline.fermanian, pierre.marion, gerard.biau}@sorbonne-universite.fr); 2. Google Research, Brain team, Paris, France (jpvert@google.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states unambiguously that the authors are releasing the code for this work nor provides a direct link to a source-code repository.
Open Datasets | No | The paper mentions using 'simulated datasets' and a 'toy task that consists in classifying the rotation direction of 2-dimensional spirals', but provides no concrete access information for them (a specific link, DOI, repository name, formal citation with authors and year, or reference to an established benchmark dataset).
Dataset Splits | No | The paper mentions training on '50 examples' but gives no dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not report the hardware (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications) used to run its experiments.
Software Dependencies | No | The paper cites general software frameworks such as PyTorch and SciPy in its bibliography, but does not list specific dependencies with version numbers (e.g., Python 3.8, CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | Quotes 'We take a feedforward RNN with 32 hidden units and hyperbolic tangent activation. It is trained on 50 examples, with and without penalization, for 200 epochs.' (see the sketch below)
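The Experiment Setup row carries the only quantitative training details reported in the paper. Since no code is released (see the Open Source Code row), the following PyTorch sketch is a loose, hypothetical illustration of how those details could be wired together with the 2-D spiral toy task mentioned under Open Datasets: a single-layer tanh RNN with 32 hidden units trained on 50 examples for 200 epochs. The spiral generator, sequence length, optimizer, learning rate, and the plain L2 stand-in for the paper's penalization are all assumptions, not details taken from the paper.

```python
import math
import torch
import torch.nn as nn


def make_spirals(n_samples=50, seq_len=100, seed=0):
    """Hypothetical toy data: 2-D spirals whose binary label is the rotation direction."""
    g = torch.Generator().manual_seed(seed)
    labels = torch.randint(0, 2, (n_samples,), generator=g)              # 0 or 1 per sample
    t = torch.linspace(0.0, 4.0 * math.pi, seq_len)                      # angle along the spiral
    r = 0.1 + 0.9 * t / t[-1]                                            # slowly growing radius
    base = torch.stack([r * torch.cos(t), r * torch.sin(t)], dim=-1)     # (seq_len, 2)
    data = base.unsqueeze(0).repeat(n_samples, 1, 1)                     # (n_samples, seq_len, 2)
    sign = torch.where(labels == 1, 1.0, -1.0).unsqueeze(-1)             # flip y to reverse rotation
    data[:, :, 1] = data[:, :, 1] * sign
    return data, labels.float()


class SpiralRNN(nn.Module):
    """Single-layer tanh RNN with 32 hidden units and a linear readout
    (a rough stand-in for the paper's feedforward RNN)."""

    def __init__(self, hidden_size=32):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=hidden_size,
                          nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, x):
        _, h_last = self.rnn(x)                     # h_last: (num_layers, batch, hidden)
        return self.readout(h_last[-1]).squeeze(-1)  # one logit per sequence


def train(penalty_weight=0.0, epochs=200, lr=1e-2):
    data, labels = make_spirals(n_samples=50)
    model = SpiralRNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(data), labels)
        if penalty_weight > 0.0:
            # Stand-in regularizer: plain L2 on the recurrent weights.
            # The paper's penalization is an RKHS-norm penalty, NOT implemented here.
            loss = loss + penalty_weight * model.rnn.weight_hh_l0.pow(2).sum()
        loss.backward()
        optimizer.step()
    return model


unpenalized = train(penalty_weight=0.0)
penalized = train(penalty_weight=1e-3)
```

The two `train` calls mirror the 'with and without penalization' comparison reported in the paper, but the penalty term, its weight, and the optimizer settings shown here are illustrative guesses rather than the authors' configuration.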