Quantized Memory-Augmented Neural Networks
Authors: Seongsik Park, Seijoon Kim, Seil Lee, Ho Bae, Sungroh Yoon
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we achieved a computation-energy gain of 22× with 8-bit fixed-point and binary quantization compared to the floating-point implementation. Measured on the bAbI dataset, the resulting model, named the quantized MANN (Q-MANN), improved the error rate by 46% and 30% with 8-bit fixed-point and binary quantization, respectively, compared to the MANN quantized using conventional techniques. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea 2Interdisciplinary Program in Bioinformatics, Seoul National University, Seoul 08826, Korea sryoon@snu.ac.kr |
| Pseudocode | No | The paper describes methods through equations and prose but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link to supplementary material in PDF format, but does not contain an unambiguous statement or link for the release of source code for the methodology. |
| Open Datasets | Yes | All experiments were performed on the bAbI dataset (10k) (Weston et al. 2015) |
| Dataset Splits | No | The paper mentions 'training and validation' error rates in figures and states that 'All experiments were performed on the bAbI dataset (10k) (Weston et al. 2015), by using a training method and hyperparameters similar to those in (Sukhbaatar et al. 2015).' However, it does not explicitly provide the specific percentages or sample counts for training, validation, or test splits within the paper's main text. |
| Hardware Specification | No | The paper discusses computation-energy consumption and mentions NVIDIA TensorRT, but it does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) required for replication. |
| Experiment Setup | Yes | Details of the model and hyperparameters are provided in Supplementary Table 6 and Table 7. |
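For readers unfamiliar with the quantization schemes compared above, the following minimal sketch illustrates generic 8-bit fixed-point quantization and deterministic sign-based binarization. The function names, bit-width split, and rounding choices here are illustrative assumptions, not the paper's exact Q-MANN procedure.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Map floats onto a signed fixed-point grid with `total_bits` width
    and `frac_bits` fractional bits, then return the dequantized values.
    (Illustrative only; the paper's exact scheme may differ.)"""
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))          # most negative representable integer
    qmax = 2 ** (total_bits - 1) - 1         # most positive representable integer
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

def quantize_binary(x):
    """Binarize values to {-1, +1} by sign (deterministic binarization)."""
    return np.where(x >= 0, 1.0, -1.0)

w = np.array([0.3, -1.7, 0.01])
print(quantize_fixed_point(w))   # values snapped to multiples of 1/16
print(quantize_binary(w))        # sign pattern in {-1, +1}
```

Replacing 32-bit floating-point arithmetic with such low-precision formats is what yields the computation-energy gains the table refers to, at the cost of quantization error that Q-MANN is designed to mitigate.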