Knowledge-based Word Sense Disambiguation using Topic Models

Authors: Devendra Singh Chaplot, Ruslan Salakhutdinov

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed method on Senseval-2, Senseval-3, SemEval-2007, SemEval-2013 and SemEval-2015 English All-Word WSD datasets and show that it outperforms the state-of-the-art unsupervised knowledge-based WSD system by a significant margin.
Researcher Affiliation | Academia | Devendra Singh Chaplot, Ruslan Salakhutdinov ({chaplot,rsalakhu}@cs.cmu.edu), Machine Learning Department, School of Computer Science, Carnegie Mellon University
Pseudocode | No | The paper includes a graphical model (Figure 4) and describes a generative process with formulas, but it does not contain pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | For evaluating our system, we use the English all-word WSD task benchmarks of the Senseval-2 (Palmer et al. 2001), Senseval-3 (Snyder and Palmer 2004), SemEval-2007 (Pradhan et al. 2007), SemEval-2013 (Navigli, Jurgens, and Vannella 2013) and SemEval-2015 (Moro and Navigli 2015).
Dataset Splits | Yes | We use the standardized version of all the datasets and use the same experimental setting as (Raganato, Camacho-Collados, and Navigli 2017) for fair comparison with prior methods. (A sketch of the standard scoring convention follows the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies or version numbers needed to replicate the experiment.
Experiment Setup | No | The paper describes the proposed model and the inference method (Gibbs Sampling) but does not provide specific details on hyperparameters, training configurations, or system-level settings. (An illustrative sampling sketch follows the table.)
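
Because the paper gives no pseudocode or hyperparameters for its Gibbs-sampling inference, the following is only a minimal illustrative sketch of a generic collapsed Gibbs sampler for an LDA-style topic model. It is not the authors' WSD-TM model (which ties topics to WordNet synsets through a structured prior), and the values of alpha, beta, and num_iters are placeholder assumptions.

# Illustrative sketch: generic collapsed Gibbs sampling for an LDA-style topic
# model. NOT the paper's exact model or settings.
import numpy as np

def collapsed_gibbs_lda(docs, num_topics, vocab_size,
                        alpha=0.1, beta=0.01, num_iters=200, seed=0):
    """docs: list of lists of word ids. Returns assignments and count matrices."""
    rng = np.random.default_rng(seed)

    # Count matrices: document-topic, topic-word, and per-topic totals.
    ndk = np.zeros((len(docs), num_topics))
    nkw = np.zeros((num_topics, vocab_size))
    nk = np.zeros(num_topics)

    # Random initialization of topic assignments.
    z = []
    for d, doc in enumerate(docs):
        zd = rng.integers(num_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1
            nkw[k, w] += 1
            nk[k] += 1

    for _ in range(num_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment from the counts.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Full conditional p(z_i = k | rest), up to a constant.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(num_topics, p=p / p.sum())
                # Record the new assignment and restore the counts.
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw

# Toy usage: 3 documents over a 6-word vocabulary, 2 topics.
if __name__ == "__main__":
    docs = [[0, 1, 2, 0], [3, 4, 5, 3], [0, 2, 4, 5]]
    z, ndk, nkw = collapsed_gibbs_lda(docs, num_topics=2, vocab_size=6)
    print(ndk)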
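
For the dataset and split rows above: the standardized framework of Raganato, Camacho-Collados, and Navigli (2017) scores systems by precision, recall, and F1 against gold sense keys. The snippet below is a hedged illustration of that scoring convention only; the instance IDs and sense keys are invented, and this is not the framework's official scorer.

# Illustrative sketch: precision/recall/F1 as used in all-words WSD evaluation.
def wsd_scores(gold, predicted):
    """gold: dict of instance id -> set of acceptable sense keys.
       predicted: dict of instance id -> single predicted sense key."""
    correct = sum(1 for i, s in predicted.items() if s in gold.get(i, set()))
    attempted = len(predicted)
    total = len(gold)
    precision = correct / attempted if attempted else 0.0
    recall = correct / total if total else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical example: 2 of 3 instances answered, both correct.
gold = {"d000.s000.t000": {"bank%1:17:01::"},
        "d000.s000.t001": {"run%2:38:00::"},
        "d000.s001.t000": {"plant%1:03:00::"}}
pred = {"d000.s000.t000": "bank%1:17:01::",
        "d000.s001.t000": "plant%1:03:00::"}
print(wsd_scores(gold, pred))  # (1.0, 0.666..., 0.8)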