Stance Classification with Target-specific Neural Attention
Authors: Jiachen Du, Ruifeng Xu, Yulan He, Lin Gui
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluations on both the English and Chinese Stance Detection datasets show that the proposed model achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | (1) Laboratory of Network Oriented Intelligent Computation, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China; (2) Department of Computing, The Hong Kong Polytechnic University, Hong Kong; (3) Guangdong Provincial Engineering Technology Research Center for Data Science, Guangzhou, China; (4) School of Engineering and Applied Science, Aston University, United Kingdom |
| Pseudocode | No | The paper describes the model architecture and training process in text and with equations, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | English Dataset: SemEval-2016 Task 6 [Mohammad et al., 2016] released the first dataset for stance detection from English tweets. Chinese Dataset: the dataset of the NLPCC-2016 Chinese Stance Detection Shared Task. |
| Dataset Splits | No | The paper mentions "5-fold cross validation on the training set" for hyperparameter tuning but does not define a separate, fixed validation split in the dataset tables or text (see the cross-validation sketch below the table). |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions "word2vec" and "Adam" but does not specify version numbers for these or any other software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | The dimensions of the word and target embeddings are 300, and the size of units in the LSTM is 100. Adam is used as the optimization method, with learning rate 5e-4, β1 = 0.9, β2 = 0.999, and ε = 1e-8. All models are trained with mini-batches of 50 instances (a hedged sketch of this configuration appears below the table). |
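
The reported hyperparameters are enough to reconstruct the training loop's configuration. Below is a minimal PyTorch sketch assembling them: embedding size 300, 100 LSTM units, Adam with lr = 5e-4, β1 = 0.9, β2 = 0.999, ε = 1e-8, and mini-batches of 50. The model skeleton (`StanceLSTM`, vocabulary size, sequence length, last-hidden-state classifier) is a placeholder of our own, not the authors' target-specific attention architecture, which the paper describes only in text and equations.

```python
# Sketch of the reported training setup; only the hyperparameter values
# come from the paper. The model itself is a stand-in, not the authors' TAN.
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000   # assumed; not reported in the paper
NUM_CLASSES = 3       # FAVOR / AGAINST / NONE

class StanceLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 300)       # 300-dim word embeddings
        self.lstm = nn.LSTM(300, 100, batch_first=True)  # 100 LSTM units
        self.out = nn.Linear(100, NUM_CLASSES)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h[:, -1])  # classify from the final hidden state

model = StanceLSTM()
# Adam exactly as reported: lr=5e-4, beta1=0.9, beta2=0.999, eps=1e-8
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4,
                             betas=(0.9, 0.999), eps=1e-8)
loss_fn = nn.CrossEntropyLoss()

# One step on a dummy mini-batch of 50 instances, as reported
batch = torch.randint(0, VOCAB_SIZE, (50, 30))   # 50 tweets, 30 tokens each
labels = torch.randint(0, NUM_CLASSES, (50,))
optimizer.zero_grad()
loss = loss_fn(model(batch), labels)
loss.backward()
optimizer.step()
```

For the Dataset Splits row: the paper tunes hyperparameters with 5-fold cross validation on the training set rather than a fixed validation split. The sketch below shows that procedure with scikit-learn's `KFold`; the candidate learning rates, features, and scoring function are hypothetical, since the paper does not specify them.

```python
# 5-fold cross validation on the training set for hyperparameter tuning.
# Data, candidate values, and the evaluate() body are placeholders.
from sklearn.model_selection import KFold
import numpy as np

X_train = np.random.rand(1000, 300)      # placeholder features
y_train = np.random.randint(0, 3, 1000)  # placeholder stance labels

def evaluate(lr, train_idx, val_idx):
    # Placeholder standing in for "train with this learning rate on
    # train_idx, score on val_idx"; returns a dummy score here.
    return np.random.rand()

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for lr in (1e-3, 5e-4, 1e-4):  # candidate learning rates (assumed)
    scores = [evaluate(lr, tr, va) for tr, va in kf.split(X_train)]
    print(f"lr={lr}: mean CV score {np.mean(scores):.3f}")
```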