Early Syntactic Bootstrapping in an Incremental Memory-Limited Word Learner
Authors: Sepideh Sadeghi, Matthias Scheutz
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate M-WO and M-B in different ambiguous contexts using the datasets described in Table 1 (each dataset consists of 500 trials). These datasets differ from each other in the source and level of their ambiguity. D1 is the least ambiguous, D2 is linguistically more ambiguous than D1, and D3 is visually more ambiguous than D1. |
| Researcher Affiliation | Academia | Sepideh Sadeghi, Matthias Scheutz Computer Science Department Tufts University, Medford MA, USA {sepideh.sadeghi,mscheutz}@tufts.edu |
| Pseudocode | No | The paper describes its models and learning algorithms using prose and mathematical equations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper states that datasets D1-D6 were created by the authors ('a probabilistic generative process to automatically create 500 utterances for D1', 'manually generated the corresponding event representations for each utterance'), but does not provide specific access information (link, DOI, repository) for these generated datasets. |
| Dataset Splits | No | The paper mentions training on D1 and testing on D4, D5, D6 for one-shot learning, and that D1-D3 each consist of '500 trials', but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or counts for internal splits within a dataset like D1, D2, or D3, or how parameters were tuned using validation sets). |
| Hardware Specification | No | The paper discusses 'memory and computational limitations of a learner (e.g., an embodied robot)' but does not provide specific hardware details such as CPU/GPU models or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages, libraries, or frameworks, with version numbers. |
| Experiment Setup | Yes | We ran M-B on D1, using different parameter values to find a good set of parameters which are used in all of our simulations with both M-B and M-WO: γ = 0.9, α = 10, κ = 0.1, and β = 1 (used in M-WO only). |
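
Since the paper releases neither code nor configuration files, the parameter values quoted in the Experiment Setup row are the only concrete setup details available. The snippet below is a minimal, hypothetical sketch of how those reported values (γ = 0.9, α = 10, κ = 0.1, β = 1) might be collected into a single configuration object for a reimplementation attempt; the `ModelConfig` class and its field names are assumptions, not part of the original paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    """Hypothetical container for the parameter values reported in the paper.

    Field names are illustrative; the paper only gives the symbols and values.
    """
    gamma: float = 0.9   # γ, reported as 0.9 (shared by M-B and M-WO)
    alpha: float = 10.0  # α, reported as 10
    kappa: float = 0.1   # κ, reported as 0.1
    beta: float = 1.0    # β, reported as 1 (used in M-WO only)

# The paper states the values were tuned once on D1 and then reused in all
# simulations, so a reimplementation would share one configuration throughout.
CONFIG = ModelConfig()
```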