One Homonym per Translation
Authors: Bradley Hauer, Grzegorz Kondrak
AAAI 2020, pp. 7895-7902
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a new annotated homonym resource that allows us to test our hypotheses on existing WSD resources. The results of the experiments provide strong empirical evidence for the hypotheses. |
| Researcher Affiliation | Academia | Bradley Hauer, Grzegorz Kondrak Department of Computing Science University of Alberta, Edmonton, Canada {bmhauer, gkondrak}@ualberta.ca |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link to a 'homonym resource' (https://webdocs.cs.ualberta.ca/~kondrak) which is a dataset, not the source code for the methodology described in the paper. |
| Open Datasets | Yes | "For testing the OHPD and OHPC hypotheses, we use SemCor (Miller et al. 1993), a large sense-annotated English corpus which was created as part of the WordNet project (Petrolito and Bond 2014)" and "MultiSemCor (Bentivogli and Pianta 2005), and JSemCor (Bond et al. 2012)". |
| Dataset Splits | No | The paper states "We train IMS on English SemCor, and test on the concatenation of five benchmark datasets of (Raganato, Camacho-Collados, and Navigli 2017)" but does not provide specific train/validation/test split percentages or sample counts for SemCor, nor explicit details about a validation split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the 'WordNet::SenseKey package' and 'WordNet Mapper' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper describes the general setup for testing hypotheses (e.g., how consistency is measured) and mentions the features used by the IMS system, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings. |
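The Experiment Setup row notes that the paper describes how consistency is measured without giving system-level settings. The paper's exact metric is not reproduced here; the following is a minimal sketch, assuming a majority-sense notion of consistency over hypothetical (homonym, translation, sense) annotation triples, of how such a per-translation consistency score could be computed:

```python
from collections import Counter, defaultdict


def translation_consistency(annotations):
    """Fraction of annotated occurrences whose sense agrees with the
    majority sense for their (homonym, translation) pair.

    annotations: iterable of (homonym, translation, sense) triples.
    A score of 1.0 would mean every translation maps to a single sense,
    i.e. perfect "one homonym per translation" behavior.
    """
    # Count sense occurrences separately for each (homonym, translation) pair.
    groups = defaultdict(Counter)
    for homonym, translation, sense in annotations:
        groups[(homonym, translation)][sense] += 1

    total = sum(sum(counts.values()) for counts in groups.values())
    # For each pair, the majority sense accounts for its most common count.
    majority = sum(counts.most_common(1)[0][1] for counts in groups.values())
    return majority / total if total else 0.0


# Toy example with the English homonym "bank" and French translations.
data = [
    ("bank", "banque", "finance"),
    ("bank", "banque", "finance"),
    ("bank", "banque", "river"),   # one inconsistent occurrence
    ("bank", "rive", "river"),
]
print(translation_consistency(data))  # → 0.75
```

All identifiers and the example data here are illustrative, not taken from the paper or its released resource.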