Imputer: Sequence Modelling via Imputation and Dynamic Programming
Authors: William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, Navdeep Jaitly
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with two competitive speech tasks, the 82 hours Wall Street Journal (WSJ) (Paul & Baker, 1992) dataset and the 960 hours LibriSpeech (Panayotov et al., 2015) dataset. |
| Researcher Affiliation | Industry | ¹Google Research, Brain Team, Toronto, Ontario, Canada. ²Work done at Google; currently at The D. E. Shaw Group, New York, New York, USA. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Figures 1 and 2 are visualizations of procedures, not formal pseudocode. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There are no explicit statements about code release or links to repositories. |
| Open Datasets | Yes | We experiment with two competitive speech tasks, the 82 hours Wall Street Journal (WSJ) (Paul & Baker, 1992) dataset and the 960 hours LibriSpeech (Panayotov et al., 2015) dataset. |
| Dataset Splits | Yes | We report both the decoding strategy and block size experiments for the LibriSpeech dev-other split in Figure 5 and Figure 6. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running experiments. It only describes the neural network architecture and training process. |
| Software Dependencies | No | The paper mentions software such as Kaldi and SentencePiece by name and citation, but does not provide specific version numbers for these or other software dependencies used in the experiments (e.g., 'Kaldi (Povey et al., 2011)', 'SentencePiece (Kudo & Richardson, 2018)'). |
| Experiment Setup | Yes | Our neural network uses 2 layers of convolution each with 11 × 3 filter size and stride of 2 × 1. For our WSJ experiments, we use 8 Transformer self-attention layers with 4 attention heads, 512 hidden size, 2048 filter size, dropout rate 0.2 and train for 300k steps. For our LibriSpeech experiments, we use 16 Transformer self-attention layers with 4 attention heads, 512 hidden size, 4096 filter size, dropout rate 0.3 and train for 1M steps. |
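
For reference, below is a minimal Python sketch that transcribes the two hyperparameter configurations quoted in the Experiment Setup row. The `ImputerConfig` class and its field names are illustrative assumptions for readability, not an API from the paper or its (unreleased) code; only the numeric values come from the quoted text.

```python
from dataclasses import dataclass

@dataclass
class ImputerConfig:
    # Hypothetical container for the setup quoted above; names are ours.
    # Convolutional frontend: 2 layers, 11 x 3 filters, stride 2 x 1.
    conv_layers: int = 2
    conv_filter_size: tuple = (11, 3)
    conv_stride: tuple = (2, 1)
    # Transformer encoder hyperparameters (defaults match the WSJ setup).
    num_layers: int = 8
    num_heads: int = 4
    hidden_size: int = 512
    filter_size: int = 2048
    dropout_rate: float = 0.2
    train_steps: int = 300_000

# WSJ: 8 layers, 4 heads, 512 hidden, 2048 filter, dropout 0.2, 300k steps.
WSJ_CONFIG = ImputerConfig()

# LibriSpeech: 16 layers, 4 heads, 512 hidden, 4096 filter, dropout 0.3, 1M steps.
LIBRISPEECH_CONFIG = ImputerConfig(
    num_layers=16,
    filter_size=4096,
    dropout_rate=0.3,
    train_steps=1_000_000,
)
```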