On the Transitivity of Hypernym-Hyponym Relations in Data-Driven Lexical Taxonomies

Authors: Jiaqing Liang, Yi Zhang, Yanghua Xiao, Haixun Wang, Wei Wang, Pinpin Zhu

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments to show the effectiveness of our approach. [...] First, we evaluate the effectiveness of the features. Then we evaluate the quality of the hypernym-hyponym pairs predicted by our transitivity inference mechanisms."
Researcher Affiliation | Collaboration | Jiaqing Liang, Yi Zhang, Yanghua Xiao (Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University); Haixun Wang (Facebook, USA); Wei Wang (School of Computer Science, Fudan University); Pinpin Zhu (Xiaoi Research, Shanghai Xiaoi Robot Technology Co. Ltd., China)
Pseudocode | No | The paper describes the proposed methods and features verbally and mathematically but includes no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no statement or link regarding public availability of source code for the described methodology.
Open Datasets | Yes | "We use WordNet to construct more labeled samples. As for our study, we used WordNet 3.0, which contains 82,115 synsets and 84,428 hypernym-hyponym relations among synsets."
Dataset Splits | Yes | "We randomly sample 5k triples (a, b, c) from our negative and positive labeled dataset (more details are in Section 'Construction of the Labeled Dataset'), respectively. All performance results (including those in the following experiments) are derived using 10-fold cross validation."
Hardware Specification | No | The paper gives no details about the hardware (e.g., GPU models, CPU types, or cloud instance specifications) used for the experiments.
Software Dependencies | No | The paper does not name software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used for the experiments.
Experiment Setup | No | While the paper describes the general approach, models, and features, it omits specific setup details such as hyperparameter values (e.g., learning rates, batch sizes, number of epochs) and other system-level training settings.
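The paper reports 10-fold cross validation over its labeled triple dataset but names no tooling for producing the folds. A minimal sketch of how such splits could be generated (the function name `ten_fold_indices` and the round-robin fold assignment are illustrative assumptions, not the authors' implementation):

```python
import random

def ten_fold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and yield (train, test) index lists for k folds.

    Each sample lands in exactly one test fold, so averaging a metric
    over the k folds uses every labeled triple once for evaluation.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # fixed seed for repeatability
    folds = [idx[i::k] for i in range(k)]     # round-robin assignment to k folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Usage: 5,000 labeled triples, as sampled in the paper
splits = list(ten_fold_indices(5000))
```

Per-fold metrics (e.g., precision of predicted hypernym-hyponym pairs) would then be averaged across the ten folds to obtain the reported performance numbers.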