Associative Long Short-Term Memory
Authors: Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, Alex Graves
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments in Section 6 show the benefits of the memory system for learning speed and accuracy. |
| Researcher Affiliation | Industry | Ivo Danihelka DANIHELKA@GOOGLE.COM Greg Wayne GREGWAYNE@GOOGLE.COM Benigno Uria BURIA@GOOGLE.COM Nal Kalchbrenner NALK@GOOGLE.COM Alex Graves GRAVESA@GOOGLE.COM Google DeepMind |
| Pseudocode | No | No structured pseudocode or algorithm blocks are present. The paper describes methods using mathematical equations and prose. |
| Open Source Code | No | No statement about making source code publicly available or links to a code repository. |
| Open Datasets | Yes | We take a sequence of ImageNet images (Russakovsky et al., 2015)... |
| Dataset Splits | No | For experiments with synthetic data, we generate new data for each training minibatch, obviating the need for a separate test data set. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) are provided for the experimental setup. |
| Software Dependencies | No | The paper mentions "Adam optimisation algorithm" but does not provide specific software dependencies with version numbers for replication. |
| Experiment Setup | Yes | All experiments used the Adam optimisation algorithm (Kingma & Ba, 2014) with no gradient clipping. ... Minibatches of size 2 were used in all tasks beside the Wikipedia task below, where the minibatch size was 10. |
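
The Experiment Setup and Dataset Splits rows above pin down only a few training details: Adam with no gradient clipping, minibatches of size 2 (10 for the Wikipedia task), and synthetic data freshly generated for every minibatch. The sketch below illustrates that training loop under stated assumptions; the framework (PyTorch), the stand-in `nn.LSTM` model, the layer sizes, sequence length, loss function, and step count are all hypothetical choices, not details from the paper, and the paper's Associative LSTM cell itself is not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the paper's Associative LSTM cell is not implemented here.
model = nn.LSTM(input_size=128, hidden_size=128, batch_first=True)
readout = nn.Linear(128, 128)

# Adam optimiser, as reported in the Experiment Setup row; no gradient clipping is applied.
params = list(model.parameters()) + list(readout.parameters())
optimizer = torch.optim.Adam(params)

MINIBATCH_SIZE = 2   # 10 for the Wikipedia task, per the Experiment Setup row
SEQ_LEN = 20         # illustrative; task-dependent in the paper
loss_fn = nn.MSELoss()  # assumed loss; the paper's tasks use task-specific objectives


def make_synthetic_minibatch():
    """Generate fresh data for every minibatch (see the Dataset Splits row).

    Random sequences stand in for the paper's synthetic tasks.
    """
    x = torch.randn(MINIBATCH_SIZE, SEQ_LEN, 128)
    y = torch.randn(MINIBATCH_SIZE, SEQ_LEN, 128)
    return x, y


for step in range(1000):  # step count is illustrative
    x, y = make_synthetic_minibatch()
    optimizer.zero_grad()
    h, _ = model(x)
    loss = loss_fn(readout(h), y)
    loss.backward()
    # Note: no clip_grad_norm_ call here, matching "no gradient clipping".
    optimizer.step()
```

Because new data is drawn for each minibatch, there is no fixed training set to overfit, which is why the paper reports no separate validation/test split for the synthetic tasks.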