Relevance-Promoting Language Model for Short-Text Conversation
Authors: Xin Li, Piji Li, Wei Bi, Xiaojiang Liu, Wai Lam
AAAI 2020, pp. 8253–8260
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on a large Chinese STC dataset demonstrate the superiority of the proposed model on relevance metrics and diversity metrics. |
| Researcher Affiliation | Collaboration | Xin Li¹, Piji Li², Wei Bi², Xiaojiang Liu², Wai Lam¹ — ¹Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong; ²Tencent AI Lab, Shenzhen, China |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://ai.tencent.com/ailab/nlp/dialogue/. |
| Open Datasets | Yes | We utilize the benchmark STC dataset (Liu et al. 2018) to evaluate the effectiveness of the proposed relevance-promoting transformer language model. |
| Dataset Splits | Yes | We split the dataset such that #train:#dev:#test is 7,024,156:2,000:800. (A sketch of reproducing a split of these sizes follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Training details are provided in the appendix. |
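The paper reports only the split sizes (7,024,156 train / 2,000 dev / 800 test), not the splitting procedure. Below is a minimal sketch of producing a split of those sizes, assuming a simple seeded random hold-out; the function name `split_stc`, the seed value, and the shuffle-based procedure are illustrative assumptions, not the authors' documented method.

```python
import random

# Split sizes reported in the paper: 7,024,156 train / 2,000 dev / 800 test.
DEV_SIZE = 2_000
TEST_SIZE = 800

def split_stc(pairs, seed=42):
    """Hypothetical helper: shuffle post-response pairs with a fixed seed,
    then carve off dev and test sets of the sizes reported in the paper.
    Whether the original split was random or chronological is not stated."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    dev = pairs[:DEV_SIZE]
    test = pairs[DEV_SIZE:DEV_SIZE + TEST_SIZE]
    train = pairs[DEV_SIZE + TEST_SIZE:]
    return train, dev, test

# Usage: train, dev, test = split_stc(corpus_pairs)
```

Fixing the seed makes the hold-out reproducible across runs, which matters when comparing against the dev/test figures reported in the paper.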