Memory Networks
Authors: Jason Weston, Sumit Chopra, and Antoine Bordes
ICLR 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 EXPERIMENTS We perform experiments on the QA dataset introduced in Fader et al. (2013). It consists of 14M statements, stored as (subject, relation, object) triples, which are stored as memories in the MemNN model. ... Results are given in Table 1. |
| Researcher Affiliation | Industry | Jason Weston, Sumit Chopra & Antoine Bordes, Facebook AI Research, 770 Broadway, New York, USA. {jase,spchopra,abordes}@fb.com |
| Pseudocode | Yes | Algorithm 1: o_t replacement to arg max when using write time features |
| Open Source Code | No | The paper does not provide an unambiguous statement about releasing its source code, nor does it include a direct link to a code repository for the described methodology. |
| Open Datasets | Yes | We perform experiments on the QA dataset introduced in Fader et al. (2013). ... training combines pseudo-labeled QA pairs... and 35M pairs of paraphrased questions from WikiAnswers |
| Dataset Splits | No | The paper mentions '7k statements and 3k questions from the simulator for training, and an identical number for testing' but does not explicitly describe a validation set or a three-way train/validation/test split for its main experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python version, specific library versions like PyTorch or TensorFlow, or solver versions) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | For MemNNs we fixed the embedding dimension to 100, learning rate to 0.01 and margin γ to 0.1 and 10 epochs of training in all experiments. |
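
The Experiment Setup row quotes only four hyperparameters (embedding dimension 100, learning rate 0.01, margin γ = 0.1, 10 epochs). The sketch below is a minimal illustration of how a margin-ranking scorer with those quoted values might be wired up; the vocabulary size, bag-of-words featurization, embedding initialization, and scoring/loss functions are assumptions made for illustration and are not taken from the paper or any released code.

```python
# Hedged sketch of a MemNN-style margin-ranking setup using the quoted
# hyperparameters. Everything except the four constants is an assumption.
import numpy as np

EMBED_DIM = 100       # "embedding dimension to 100"
LEARNING_RATE = 0.01  # "learning rate to 0.01" (update rule not shown here)
MARGIN = 0.1          # "margin γ to 0.1"
EPOCHS = 10           # "10 epochs of training"

rng = np.random.default_rng(0)
VOCAB = 1000  # hypothetical vocabulary size, chosen only for this sketch
U = rng.normal(scale=0.1, size=(EMBED_DIM, VOCAB))  # shared embedding matrix

def embed(bag_of_words: np.ndarray) -> np.ndarray:
    """Project a bag-of-words vector into the embedding space."""
    return U @ bag_of_words

def score(query_bow: np.ndarray, memory_bow: np.ndarray) -> float:
    """Dot-product match score between an embedded query and an embedded memory."""
    return float(embed(query_bow) @ embed(memory_bow))

def margin_loss(query: np.ndarray, positive: np.ndarray, negative: np.ndarray) -> float:
    """Hinge loss: the correct memory should outscore a sampled wrong one by MARGIN."""
    return max(0.0, MARGIN - score(query, positive) + score(query, negative))

if __name__ == "__main__":
    # Toy usage with random bag-of-words vectors standing in for real data.
    q = rng.integers(0, 2, size=VOCAB).astype(float)
    pos = rng.integers(0, 2, size=VOCAB).astype(float)
    neg = rng.integers(0, 2, size=VOCAB).astype(float)
    print("hinge loss on one toy triple:", margin_loss(q, pos, neg))
```

A faithful reimplementation would additionally need the stochastic gradient updates over the embedding matrices, negative sampling, the separate output- and response-selection stages, and the hashing tricks described in the paper; none of those details are pinned down by the quoted setup.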