Learning Term Embeddings for Hypernymy Identification
Authors: Zheng Yu, Haixun Wang, Xuemin Lin, Min Wang
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our approach outperforms other supervised methods on two popular datasets and the learned term embeddings has better quality than existing term distributed representations with respect to hypernymy identification. |
| Researcher Affiliation | Collaboration | Zheng Yu (1), Haixun Wang (2), Xuemin Lin (1,3), Min Wang (2). 1: East China Normal University, China (zyu.0910@gmail.com); 2: Google Research, USA ({haixun, minwang}@google.com); 3: University of New South Wales, Australia (lxue@cse.unsw.edu.au) |
| Pseudocode | No | The paper includes a neural network architecture diagram (Figure 1) but no pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use two datasets, BLESS [Baroni and Lenci, 2011] and ENTAILMENT [Baroni et al., 2012] for evaluation. |
| Dataset Splits | No | The paper describes train and test splits (e.g., 'hold out one target concept and train on the remaining 199 ones'), but does not explicitly mention a separate validation set or split for hyperparameter tuning. |
| Hardware Specification | Yes | We ran a single-threaded program on one machine powered by an Intel Core(TM) i5-2400 3.1-GHz with 8GB memory, running Linux. |
| Software Dependencies | No | The paper mentions that SVM is trained and refers to the Skip-gram Model, but does not provide specific version numbers for software dependencies (e.g., Python, specific libraries, frameworks). |
| Experiment Setup | Yes | SVM is trained using a RBF kernel with γ = 0.03125 and penalty term C = 8.0. |
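
As a hedged illustration only, the reported classifier configuration (an RBF-kernel SVM with γ = 0.03125 and penalty term C = 8.0) together with the hold-one-target-concept-out evaluation quoted above could be sketched with scikit-learn roughly as follows. The data loading, feature construction, and all variable names are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: RBF-kernel SVM with the hyperparameters reported in the paper
# (gamma=0.03125, C=8.0), evaluated by holding out one target concept at a time.
# Feature extraction and data loading are assumptions, not the authors' pipeline.
import numpy as np
from sklearn.svm import SVC


def hold_one_concept_out(X, y, concepts):
    """Train on all but one target concept, test on the held-out concept's pairs.

    X        : (n_pairs, n_features) feature matrix for term pairs
    y        : (n_pairs,) binary hypernymy labels
    concepts : (n_pairs,) id of the target concept each pair belongs to
    """
    accuracies = []
    for held_out in np.unique(concepts):
        test_mask = concepts == held_out
        train_mask = ~test_mask
        # Hyperparameter values as reported in the paper's experiment setup.
        clf = SVC(kernel="rbf", gamma=0.03125, C=8.0)
        clf.fit(X[train_mask], y[train_mask])
        accuracies.append(clf.score(X[test_mask], y[test_mask]))
    return float(np.mean(accuracies))


# Example usage with placeholder data (loader is hypothetical):
# X, y, concepts = load_bless_pair_features()
# print(hold_one_concept_out(X, y, concepts))
```

This sketch assumes the pair features are already derived from the learned term embeddings; the paper does not release code, so the split logic above only mirrors the quoted description of training on the remaining 199 target concepts.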