Quantum-Inspired Representation for Long-Tail Senses of Word Sense Disambiguation
Authors: Junwei Zhang, Ruifang He, Fengyu Guo
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We theoretically prove the correctness of the method, and verify its effectiveness under the standard WSD evaluation framework and obtain state-of-the-art performance. Furthermore, we also test on the constructed LTS and the latest cross-lingual datasets, and achieve promising results. |
| Researcher Affiliation | Academia | 1 Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China. 2 College of Computer and Information Engineering, Tianjin Normal University, Tianjin, China. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | Other hyperparameters not listed will be given in the published code. |
| Open Datasets | Yes | To evaluate the effectiveness of QR-WSD, we carried out experiments under two evaluation settings, namely the standardized evaluation setting and the enhanced evaluation setting. The standardized setting includes only SemCor (http://lcl.uniroma1.it/wsdeval/training-data) in the training set; the enhanced setting includes SemCor and WNGT (https://wordnetcode.princeton.edu/glosstag.shtml) in the training set. |
| Dataset Splits | Yes | SemEval07 (SE7; Pradhan et al. 2007), following convention (Kumar et al. 2019; Blevins and Zettlemoyer 2020), is regarded as the development set. |
| Hardware Specification | Yes | The computing platform of the program is Ubuntu 18.04, which is equipped with two Tesla P40 GPUs. |
| Software Dependencies | Yes | The program is developed based on the PyTorch 1.8 framework and written in Python 3.6. Moreover, WordNet 3.0 is provided by NLTK 3.5, and bert-base-uncased and bert-large-uncased are provided by Transformers 4.5. A hedged environment sketch follows the table. |
| Experiment Setup | Yes | The learning rate, epoch and batch size of the model are {1e-5, 5e-6}, 20 and 4 respectively. Other hyperparameters not listed will be given in the published code. A minimal training-loop sketch using only these values is given below. |
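
The software stack in the Software Dependencies row can be recreated with a short script. Below is a minimal sketch, assuming the standard NLTK and Hugging Face Transformers APIs; only the version numbers come from the paper, and the example lemma and sense-enumeration step are illustrative, not the authors' pipeline.

```python
# A minimal sketch, assuming the standard NLTK and Transformers APIs.
# Versions are those named in the paper: Python 3.6, PyTorch 1.8,
# NLTK 3.5, Transformers 4.5. The lemma "bank" is illustrative only.
import nltk
from nltk.corpus import wordnet as wn
from transformers import BertModel, BertTokenizer

nltk.download("wordnet")  # NLTK provides WordNet 3.0, as the paper states

# bert-base-uncased / bert-large-uncased are the encoders the paper names
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

# Enumerate candidate senses for a target word, as any WSD system must
for synset in wn.synsets("bank", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())
```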
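Similarly, the hyperparameters in the Experiment Setup row map onto a conventional fine-tuning loop. This is a sketch under stated assumptions: only the learning rates {1e-5, 5e-6}, 20 epochs, and batch size 4 are reported in the paper; the AdamW optimizer, the `train` helper, and the `.loss` interface are placeholders, not the published code.

```python
# Hedged sketch of the reported training configuration. Only LEARNING_RATES,
# EPOCHS, and BATCH_SIZE are from the paper; the optimizer choice and loop
# skeleton are assumptions made for illustration.
import torch
from torch.utils.data import DataLoader

LEARNING_RATES = (1e-5, 5e-6)  # the two values the paper reports
EPOCHS = 20
BATCH_SIZE = 4

def train(model, dataset, lr):
    loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)  # assumed optimizer
    for epoch in range(EPOCHS):
        for batch in loader:
            optimizer.zero_grad()
            loss = model(**batch).loss  # assumes the model exposes a .loss field
            loss.backward()
            optimizer.step()
```

Since the paper lists two learning rates, a reproduction would presumably run `train` once per value, e.g. `for lr in LEARNING_RATES: train(model, dataset, lr)`, selecting the better run on the SE7 development set.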