Integrating Linguistic Knowledge to Sentence Paraphrase Generation
Authors: Zibo Lin, Ziran Li, Ning Ding, Hai-Tao Zheng, Ying Shen, Wei Wang, Cong-Zhi Zhao
AAAI 2020, pp. 8368-8375 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both English and Chinese datasets show that our method significantly outperforms the state-of-the-art approaches in terms of both automatic and human evaluation. |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science and Technology, Tsinghua University; ²Tsinghua Shenzhen International Graduate School, Tsinghua University; ³School of Electronics and Computer Engineering, Peking University Shenzhen Graduate School; ⁴Giiso Information Technology Co., Ltd |
| Pseudocode | No | The paper describes the model architecture and equations but does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Code of our model is publicly available at https://github.com/LINMouMouZiBo/KEPN |
| Open Datasets | Yes | We carry out our experiments on three benchmark datasets, including the English datasets WikiAnswers (Fader, Zettlemoyer, and Etzioni 2013) and Quora, as well as the Chinese dataset LCQMC (Liu et al. 2018). |
| Dataset Splits | No | Hyper-parameters are tuned on the validation dataset. The paper mentions a validation set but gives no details about its size or how it was split from the main dataset; only the training and test set sizes are reported. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions software components such as GloVe and the Adam optimizer but does not specify version numbers for these or for any other key software dependencies. |
| Experiment Setup | Yes | We set the trade-off parameter α to 0.9, the dropout rate to 0.3 and the learning rate of the optimizer to 1e-5. (A configuration sketch follows the table.) |
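
For concreteness, the sketch below shows one way the reported setup could be wired together in PyTorch. It is a minimal, hypothetical reconstruction, not the authors' released code (that lives in the KEPN repository linked above): the class `KEPNSketch`, its encoder architecture, the vocabulary size, and the assumption that α weights a generation loss against a synonym-position loss are all illustrative guesses. Only the values α = 0.9, dropout = 0.3, learning rate 1e-5, the Adam optimizer, and GloVe-initialized embeddings come from the paper itself.

```python
import torch
import torch.nn as nn

# Values reported in the paper's experiment setup.
ALPHA = 0.9           # trade-off parameter (its exact role in the loss is assumed here)
DROPOUT = 0.3         # dropout rate
LEARNING_RATE = 1e-5  # Adam learning rate


class KEPNSketch(nn.Module):
    """Hypothetical stand-in for the paper's model: only the dropout rate
    and the GloVe-sized (300-d) embeddings reflect the reported setup."""

    def __init__(self, vocab_size: int, embed_dim: int = 300, hidden: int = 512):
        super().__init__()
        # In the paper, embeddings would be initialized from pre-trained GloVe vectors.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.dropout = nn.Dropout(DROPOUT)
        self.generator = nn.Linear(hidden, vocab_size)  # paraphrase token logits
        self.position_head = nn.Linear(hidden, 2)       # synonym-position tag logits

    def forward(self, src: torch.Tensor):
        h, _ = self.encoder(self.dropout(self.embed(src)))
        return self.generator(h), self.position_head(h)


model = KEPNSketch(vocab_size=30_000)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
gen_criterion = nn.CrossEntropyLoss()
pos_criterion = nn.CrossEntropyLoss()


def training_step(src, tgt, pos_labels):
    """One update with an assumed weighted multi-task loss:
    L = alpha * L_generation + (1 - alpha) * L_position."""
    optimizer.zero_grad()
    gen_logits, pos_logits = model(src)
    # CrossEntropyLoss expects (batch, classes, time), hence the transposes.
    loss = (ALPHA * gen_criterion(gen_logits.transpose(1, 2), tgt)
            + (1 - ALPHA) * pos_criterion(pos_logits.transpose(1, 2), pos_labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The encoder, the second head, and the exact loss combination are placeholders; anyone reusing this should check them against the released KEPN code, since the paper states the hyperparameter values but not this decomposition.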