Dialog-based Language Learning
Authors: Jason E. Weston
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher's response. |
| Researcher Affiliation | Industry | Jason Weston, Facebook AI Research, New York. jase@fb.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | No | The paper states that the datasets are available for download at http://fb.ai/babi, but it does not provide access to source code for the methodology described in the paper. |
| Open Datasets | Yes | In our experiments we constructed the ten supervision tasks for the two datasets which are all available for download at http://fb.ai/babi. |
| Dataset Splits | Yes | In all cases a training, validation and test set is provided. For the bAbI dataset this consists of 1000, 100 and 1000 questions respectively per task, and for MovieQA there are 96k, 10k and 10k respectively (see the sketch after this table). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using an end-to-end memory network (MemN2N) but does not provide specific software dependencies with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | No | The paper states 'Hyperparameters for all methods are optimized on the validation sets' but does not provide concrete hyperparameter values, training configurations, or system-level settings in the main text. |
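
As a quick reference, the split sizes reported in the Dataset Splits row can be collected in a small configuration dictionary. This is a minimal sketch only: the variable names and structure below are illustrative assumptions for readability, not something taken from the paper or any released code.

```python
# Split sizes as reported in the paper's description of the two datasets.
# bAbI dialog counts are per supervision task; MovieQA counts are totals.
# Names and structure here are illustrative assumptions, not from the paper.
DATASET_SPLITS = {
    "babi_dialog": {   # questions per task
        "train": 1000,
        "valid": 100,
        "test": 1000,
    },
    "movieqa": {       # approximate total question counts
        "train": 96_000,
        "valid": 10_000,
        "test": 10_000,
    },
}

if __name__ == "__main__":
    # Print each dataset's splits and their total for a quick sanity check.
    for name, splits in DATASET_SPLITS.items():
        total = sum(splits.values())
        print(f"{name}: {splits} (total {total})")
```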