Pre-training of Recurrent Neural Networks via Linear Autoencoders
Authors: Luca Pasa, Alessandro Sperduti
NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using four well known datasets of sequences of polyphonic music, we show that the proposed pre-training approach is highly effective, since it allows to largely improve the state of the art results on all the considered datasets. |
| Researcher Affiliation | Academia | Luca Pasa, Alessandro Sperduti Department of Mathematics University of Padova, Italy {pasa,sperduti}@math.unipd.it |
| Pseudocode | Yes | Algorithm 1 shows in pseudo-code the main steps of our procedure. (A hedged sketch of this construction appears below the table.) |
| Open Source Code | No | The paper does not state that the code for its proposed linear autoencoder pre-training method is open source, nor does it provide a link to it. It only mentions using third-party Theano-based software for RNN training. |
| Open Datasets | Yes | In order to evaluate our pre-training approach, we decided to use the four polyphonic music sequences datasets used in [21] for assessing the prediction abilities of the RNN-RBM model. |
| Dataset Splits | Yes | Each dataset is split in training set, validation set, and test set. Statistics on the datasets, including largest sequence length, are given in columns 2-4 of Table 1. |
| Hardware Specification | Yes | Time in seconds needed to compute pre-training matrices (Pre-) (on Intel® Xeon® CPU E5-2670 @2.60GHz with 128 GB) and to perform training of a RNN with p = 50 for 5000 epochs (on GPU NVidia K20). |
| Software Dependencies | No | The paper mentions 'Theano-based stochastic gradient descent software' but does not provide a specific version number for Theano or any other software dependency. |
| Experiment Setup | Yes | Our pre-training approach (Pre T-RNN) has been assessed by using a different number of hidden units (i.e., p is set in turn to 50, 100, 150, 200, 250) and 5000 epochs of RNN training. (The toy sweep below the table mirrors this setup.) |
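
The Pseudocode row refers to the paper's Algorithm 1, which pre-trains the RNN by solving a linear autoencoder for sequences exactly via a truncated SVD of a prefix data matrix. Below is a minimal sketch of that construction for a single sequence; the function name, variable names, and the plain `numpy.linalg.svd` call are our assumptions (the paper stacks all training sequences into one data matrix and resorts to an approximate SVD when it is large), so treat this as illustrative rather than the authors' implementation.

```python
import numpy as np

def linear_autoencoder_pretrain(X, p):
    """Sketch: exact linear-autoencoder solution for one sequence via SVD.

    X : (T, a) array, one time step per row.
    p : number of hidden units to retain.
    Returns (A, B) for the encoder y_t = A @ x_t + B @ y_{t-1}.
    """
    T, a = X.shape
    # Data matrix Xi: row t holds the reversed prefix x_t, x_{t-1}, ..., x_1,
    # zero-padded to length T*a.
    Xi = np.zeros((T, T * a))
    for t in range(T):
        prefix = X[t::-1].reshape(-1)
        Xi[t, : prefix.size] = prefix
    # Truncated SVD Xi ~= U_p S_p V_p^T; the hidden states are Y = Xi @ V_p.
    _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
    V = Vt[:p].T                    # (T*a, p), requires p <= T here
    # Row recurrence: xi_t = E @ x_t + F @ xi_{t-1}, where E stacks x_t on
    # top and F shifts the previous prefix down by a slots. Hence:
    A = V[:a, :].T                  # V^T @ E, shape (p, a)
    B = V[a:, :].T @ V[:-a, :]      # V^T @ F @ V, shape (p, p)
    return A, B
```

In the paper's procedure, A and B obtained this way initialize the RNN's input-to-hidden and recurrent weight matrices before gradient training takes over.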
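For context, here is a hypothetical driver mirroring the hidden-unit sweep quoted in the Experiment Setup row (p set in turn to 50, 100, 150, 200, 250). The random piano-roll stand-in is ours, and the 5000 epochs of Theano-based gradient training that follow in the paper are not reproduced here.

```python
# Toy usage: random binary data standing in for a piano-roll sequence
# (88 pitches per step, as in the polyphonic-music datasets).
rng = np.random.default_rng(0)
X = (rng.random((300, 88)) < 0.05).astype(float)  # T=300 steps, a=88 features
for p in (50, 100, 150, 200, 250):                # hidden sizes from the paper
    A, B = linear_autoencoder_pretrain(X, p)
    print(p, A.shape, B.shape)                    # A: (p, 88), B: (p, p)
```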