Antonym-Synonym Classification Based on New Sub-Space Embeddings

Authors: Muhammad Asif Ali, Yifang Sun, Xiaoling Zhou, Wei Wang, Xiang Zhao
Pages: 6204-6211

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the proposed model outperforms existing research on antonym-synonym distinction in both speed and performance.
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, UNSW, Australia; (2) College of Computer Science and Technology, DGUT, China; (3) Key Laboratory of Science and Technology on Information System Engineering, NUDT, China
Pseudocode | No | The paper describes algorithms and models in prose and mathematical formulations but does not include explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not provide any explicit statement about making its source code available, nor does it include a link to a code repository.
Open Datasets | Yes | For model training, we use an existing dataset previously used by (Schwartz, Reichart, and Rappoport 2015; Roth and Schulte im Walde 2014; Nguyen, Schulte im Walde, and Vu 2016; 2017). It has been accumulated from different sources encompassing WordNet (Fellbaum 1998) and WordNik.
Dataset Splits | Yes | The details of the dataset are given in Table 1; it contains antonym and synonym pairs for three categories (i.e., verbs, adjectives and nouns) in a 1:1 ratio. To provide a uniform platform for comparative evaluation, we use the training, test and dev splits previously defined by the existing models. The training data is used to train the Distiller in Phase-I and the classifier in Phase-II. The development data is used for the Distiller's parameter tuning. The model performance is reported on the test set. (A split-loading sketch follows the table.)
Hardware Specification | Yes | All the experiments are performed on an Intel Xeon(R) CPU E5-2640 (v4) with 256 GB main memory and an Nvidia 1080Ti GPU.
Software Dependencies | No | The paper mentions software components like the 'XGBoost classifier' and 'Adam-Optimizer' but does not specify their version numbers, nor does it list specific programming language or library versions.
Experiment Setup | Yes | The dimensionality of each sub-space (i.e., ANT and SYN) is set to 60d. The neural network encoders used in the Distiller employ 80 units in the first layer and 60 units in the second layer. We use the Adam-Optimizer (Kingma and Ba 2014) to train the Distiller. (An encoder sketch follows the table.)
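
The Dataset Splits row above notes that predefined train/dev/test splits drive the two training phases. Below is a minimal sketch, not the authors' code, of how such splits might be consumed; the file names, paths, and tab-separated format are assumptions, since the paper only states that it reuses splits defined in prior work.

    import csv

    def load_pairs(path):
        # Read (word1, word2, label) rows from a tab-separated file.
        # The file layout is a hypothetical stand-in for the published splits.
        with open(path, newline="", encoding="utf-8") as f:
            return [tuple(row) for row in csv.reader(f, delimiter="\t")]

    train_pairs = load_pairs("splits/train.tsv")  # Phase-I (Distiller) and Phase-II (classifier) training
    dev_pairs = load_pairs("splits/dev.tsv")      # Distiller parameter tuning
    test_pairs = load_pairs("splits/test.tsv")    # final reported performance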
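
The Experiment Setup row reports the only architectural hyperparameters quoted for the Distiller: 80 units in the first encoder layer, 60 in the second, and 60-dimensional sub-spaces trained with Adam. The sketch below illustrates one sub-space encoder with those sizes in PyTorch; it is an assumption-laden illustration rather than the published implementation, and the input embedding dimensionality (300), the ReLU activation, and the default Adam learning rate are assumptions not stated in the quoted text.

    import torch
    import torch.nn as nn

    EMB_DIM = 300       # assumed dimensionality of the input word embeddings
    SUBSPACE_DIM = 60   # reported sub-space (ANT / SYN) dimensionality

    class SubspaceEncoder(nn.Module):
        """Two-layer feed-forward encoder: 80 units, then 60 units."""
        def __init__(self, in_dim=EMB_DIM):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 80),        # first layer: 80 units (reported)
                nn.ReLU(),                    # activation choice is an assumption
                nn.Linear(80, SUBSPACE_DIM),  # second layer: 60 units (reported)
            )

        def forward(self, x):
            return self.net(x)

    # One encoder per sub-space, trained with Adam as reported
    # (learning rate left at the library default, which the quoted text does not specify).
    ant_encoder = SubspaceEncoder()
    syn_encoder = SubspaceEncoder()
    optimizer = torch.optim.Adam(
        list(ant_encoder.parameters()) + list(syn_encoder.parameters())
    )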