Exploring Answer Stance Detection with Recurrent Conditional Attention
Authors: Jianhua Yuan, Yanyan Zhao, Jingfang Xu, Bing Qin (pp. 7426-7433)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a manually labeled Chinese community QA stance dataset show that RCA outperforms four strong baselines by an average of 2.90% macro-F1 and 2.66% micro-F1. |
| Researcher Affiliation | Collaboration | 1Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China 2Department of Media Technology and Art, Harbin Institute of Technology, China 3Sogou Technology Inc, Beijing, China |
| Pseudocode | No | The paper describes the model architecture and equations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | We then further evaluate our model on a manually annotated Chinese QA stance dataset, which is publicly available along with the source code at https://github.com/surpriseshelf/AnswerStance. |
| Open Datasets | Yes | We then further evaluate our model on a manually annotated Chinese QA stance dataset, which is publicly available along with the source code at https://github.com/surpriseshelf/AnswerStance. |
| Dataset Splits | Yes | We split one-tenth of the training set for tuning parameters and apply early stopping according to performance on the validation set during training. Table 1: Training 4050 / 1460 / 5088; Test 856 / 1018 / 1119. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions 'All models are implemented using PyTorch.' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The initial learning rate is set to 1e-4. Mini-batch size is set to 8 for all models and dropout of 0.5 is adopted for preventing overfitting. The maximum sequence lengths of question and answer are set to 25 and 45 respectively. |
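
The split-and-early-stopping procedure quoted in the Dataset Splits row can be sketched in plain Python. This is a minimal illustration of the described protocol, not the authors' released code; all function and variable names here are our own.

```python
import random

def train_val_split(examples, val_fraction=0.1, seed=0):
    """Hold out one-tenth of the training set for validation,
    mirroring the split described in the paper."""
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    n_val = int(len(examples) * val_fraction)
    train = [examples[i] for i in idx[n_val:]]
    val = [examples[i] for i in idx[:n_val]]
    return train, val

def train_with_early_stopping(train_epoch, evaluate, patience=3, max_epochs=50):
    """Stop training once the validation score has not improved
    for `patience` consecutive epochs (patience value is illustrative;
    the paper does not state one)."""
    best, stale = float("-inf"), 0
    for epoch in range(max_epochs):
        train_epoch(epoch)
        score = evaluate()
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best
```

With the Table 1 training-set size (4050 + 1460 + 5088 = 10598 answers), this holdout yields 1059 validation and 9539 training examples.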
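
The hyperparameters in the Experiment Setup row can be collected into a single configuration block. The values come from the paper; the dict itself and its key names are our own labels.

```python
# Hyperparameters reported in the paper (key names are illustrative).
RCA_CONFIG = {
    "learning_rate": 1e-4,   # initial learning rate
    "batch_size": 8,         # mini-batch size, shared by all models
    "dropout": 0.5,          # dropout rate used to prevent overfitting
    "max_question_len": 25,  # maximum question sequence length
    "max_answer_len": 45,    # maximum answer sequence length
}
```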