Translations as Additional Contexts for Sentence Classification
Authors: Reinald Kim Amplayo, Kyungjae Lee, Jinyoung Yeo, Seung-won Hwang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our method performs competitively compared to previous models, achieving best classification performance on multiple data sets. |
| Researcher Affiliation | Academia | Yonsei University, Seoul, South Korea; Pohang University of Science and Technology, Pohang, South Korea |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code we use in this paper is publicly shared: https://github.com/rktamplayo/MCFA |
| Open Datasets | Yes | (a) MR [Pang and Lee, 2005]: Movie reviews data... (b) SUBJ [Pang and Lee, 2004]: Subjectivity data... (c) CR [Hu and Liu, 2004]: Customer reviews... (d) TREC [Li and Roth, 2002]: TREC question data set... |
| Dataset Splits | Yes | If not, we use 10-fold cross validation (marked as CV) with random split. ... We perform early stopping using a random 10% of the training set as the development set. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'polyglot library' and 'FastText pre-trained vectors' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | For our CNN, we use rectified linear units and three filters with different window sizes h = 3, 4, 5 with 100 feature maps each... We use dropout... with a dropout rate of 0.5. We also use an l2 constraint of 3... During training, we use mini-batch size of 50. Training is done via stochastic gradient descent over shuffled mini-batches with the Adadelta update rule. (A CNN sketch follows the table.) |
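
The split protocol quoted in the Dataset Splits row can be reproduced with standard tooling. Below is a minimal sketch assuming scikit-learn; the paper does not name the library it used, so `KFold` and `train_test_split` are illustrative choices, and `X`/`y` are placeholder stand-ins for the encoded sentences and labels.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

# Placeholder data: X/y stand in for encoded sentences and labels.
X = np.arange(1000).reshape(-1, 1)
y = np.random.randint(0, 2, size=len(X))

# 10-fold cross validation with random split (marked as CV in the paper).
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    # Hold out a random 10% of the training fold as the development set
    # used for early stopping.
    train_idx, dev_idx = train_test_split(
        train_idx, test_size=0.1, random_state=0
    )
    # Train on train_idx, early-stop on dev_idx, report on test_idx.
```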
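
The Experiment Setup row pins down most of a Kim-style sentence CNN (the quoted hyperparameters match the widely used configuration of Kim [2014]). The sketch below wires them together in PyTorch, which is an assumption, as the paper does not state its framework: ReLU activations, filter windows h = 3, 4, 5 with 100 feature maps each, dropout of 0.5, an l2 max-norm constraint of 3 on the output weights, and Adadelta updates over shuffled mini-batches of 50. The class name `KimCNN`, the helper `clip_output_norm`, and the vocabulary and embedding sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KimCNN(nn.Module):
    """Sentence CNN with the hyperparameters quoted above (names hypothetical)."""

    def __init__(self, vocab_size, embed_dim=300, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Three filters with window sizes h = 3, 4, 5 and 100 feature maps each.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 100, kernel_size=h) for h in (3, 4, 5)]
        )
        self.dropout = nn.Dropout(0.5)  # dropout rate of 0.5
        self.fc = nn.Linear(3 * 100, num_classes)

    def forward(self, x):                  # x: (batch, seq_len) token ids
        e = self.embed(x).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Rectified linear units, then max-over-time pooling per filter size.
        pooled = [F.relu(conv(e)).max(dim=2).values for conv in self.convs]
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))

model = KimCNN(vocab_size=20_000)
# Stochastic gradient descent over shuffled mini-batches (size 50)
# with the Adadelta update rule.
optimizer = torch.optim.Adadelta(model.parameters())

def clip_output_norm(max_norm=3.0):
    # l2 constraint of 3, read here as the max-norm rescaling from Kim [2014]:
    # after each update, rescale output weight rows whose l2 norm exceeds the cap.
    with torch.no_grad():
        norm = model.fc.weight.norm(dim=1, keepdim=True).clamp(min=1e-12)
        model.fc.weight.mul_(norm.clamp(max=max_norm) / norm)
```

The max-norm reading of the "l2 constraint of 3" is an interpretation; the quoted setup does not spell out how the constraint is applied, and a weight-decay penalty would be an alternative reading.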