Recurrent Kernel Networks

Authors: Dexiong Chen, Laurent Jacob, Julien Mairal

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally show that our approach is well suited to biological sequences, where it outperforms existing methods for protein classification tasks." Section 4 (Experiments): "We evaluate RKN and compare it to typical string kernels and RNN for protein fold recognition."
Researcher Affiliation | Academia | Dexiong Chen (Inria, dexiong.chen@inria.fr); Laurent Jacob (CNRS, laurent.jacob@univ-lyon1.fr); Julien Mairal (Inria, julien.mairal@inria.fr). Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France; Univ. Lyon, Université Lyon 1, CNRS, Laboratoire de Biométrie et Biologie Evolutive UMR 5558, 69000 Lyon, France.
Pseudocode | No | The paper describes its computational procedures through dynamic programming and equations (e.g., Theorem 1 and Eq. 7) but does not present them in a structured pseudocode or algorithm block; a hedged sketch of this style of recursion is given after the table.
Open Source Code | Yes | "Pytorch code is provided with the submission and additional details given in Appendix E."
Open Datasets | Yes | "The resulting dataset can be downloaded from http://www.bioinf.jku.at/software/LSTM_protein."
Dataset Splits | Yes | "for each of the 85 tasks, we hold out one quarter of the training samples as a validation set, use it to tune α, gap penalty λ and the regularization parameter µ in the prediction layer."
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions 'Pytorch code' but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | "The initial learning rate for Adam is fixed to 0.05 and is halved as long as there is no decrease of the validation loss for 5 successive epochs. We fix k to 10, the number of anchor points q to 128 and use single layer CKN and RKN throughout the experiments. We train 100 epochs for each dataset." A PyTorch sketch of this training protocol, combined with the validation split above, follows the table.
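
Since the paper gives no pseudocode, the following is a minimal sketch of the kind of dynamic program it describes: a gapped k-mer recursion in which each sequence position is compared to each anchor position with an exponential similarity and skipped positions pay a gap penalty lam. The function name gapped_kmer_embedding, the exact update rule, the choice of returning c[k] at the end of the sequence, and the omission of the paper's Nyström projection and normalization are all assumptions for illustration; the actual recursion is specified by Theorem 1 and Eq. 7 of the paper.

```python
import torch

def gapped_kmer_embedding(x, anchors, alpha=0.5, lam=0.5):
    """Toy gapped k-mer features (not the paper's exact Eq. 7).

    x:       (n, d) sequence of unit-normalized character embeddings.
    anchors: (q, k, d) anchor k-mers, unit-normalized per position.
    Returns a (q,) vector with one feature per anchor k-mer.
    """
    n, d = x.shape
    q, k, _ = anchors.shape
    # sims[t, j, i]: inner product between sequence position t and position i of anchor j
    sims = torch.einsum("td,qid->tqi", x, anchors)
    b = torch.exp(alpha * (sims - 1.0))  # Gaussian-like match score for unit vectors

    # c[i] accumulates matches of the first i anchor positions seen so far
    c = torch.zeros(k + 1, q)
    c[0] = 1.0
    for t in range(n):
        c_prev = c.clone()
        for i in range(1, k + 1):
            # either skip position t (gap penalty lam) or extend a length-(i-1) match with it
            c[i] = lam * c_prev[i] + b[t, :, i - 1] * c_prev[i - 1]
    return c[k]  # some variants instead sum c[k] over end positions

# Example with the paper's reported sizes (k = 10, q = 128) on random toy embeddings
x = torch.nn.functional.normalize(torch.randn(50, 20), dim=-1)
anchors = torch.nn.functional.normalize(torch.randn(128, 10, 20), dim=-1)
features = gapped_kmer_embedding(x, anchors, alpha=0.6, lam=0.5)  # shape (128,)
```

The two nested loops are written for readability; an actual implementation would vectorize the inner loop and apply the kernel-space projection that the paper derives, which this sketch leaves out.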
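
The dataset-split and experiment-setup entries above map onto standard PyTorch components. Below is a minimal, self-contained sketch under stated assumptions: the random tensors, batch size, and linear classifier are synthetic placeholders standing in for a protein-fold task and the single-layer RKN, and ReduceLROnPlateau is used to approximate the described schedule of halving the learning rate after 5 epochs without a decrease of the validation loss; the tuning of α, the gap penalty λ, and the regularization parameter µ on the validation set is not shown.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Synthetic stand-ins so the snippet runs end to end (placeholders, not the paper's data/model)
X, y = torch.randn(400, 64), torch.randint(0, 2, (400,))
dataset = TensorDataset(X, y)

# Hold out one quarter of the training samples as a validation set
n_val = len(dataset) // 4
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

model = torch.nn.Linear(64, 2)          # placeholder for the single-layer RKN
criterion = torch.nn.CrossEntropyLoss()

# Adam with initial learning rate 0.05, halved after 5 epochs without
# improvement of the validation loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=5)

for epoch in range(100):                # "We train 100 epochs for each dataset."
    model.train()
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()

    # Validation loss on the held-out quarter drives the learning-rate schedule
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for inputs, targets in val_loader:
            val_loss += criterion(model(inputs), targets).item()
    scheduler.step(val_loss)
```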