GIRNet: Interleaved Multi-Task Recurrent State Sequence Models
Authors: Divam Gupta, Tanmoy Chakraborty, Soumen Chakrabarti (pp. 6497–6504)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the superiority of GIRNet using three applications: sentiment classification of code-switched passages, part-of-speech tagging of code-switched text, and target position-sensitive annotation of sentiment in monolingual passages. In all cases, we establish new state-of-the-art performance beyond recent competitive baselines. These experiments are described in Section 5. |
| Researcher Affiliation | Academia | IIIT Delhi, India; IIT Bombay, India. {divam14038, tanmoy}@iiitd.ac.in, soumen@cse.iitb.ac.in |
| Pseudocode | No | The paper describes the model architecture with equations and diagrams (Figure 1 and Figure 2), but it does not include a dedicated pseudocode block or algorithm. |
| Open Source Code | Yes | The code is available at https://github.com/divamgupta/mtl_girnet |
| Open Datasets | Yes | For the primary task we use the sentiment classification dataset of English-Spanish code-switched sentences (Vilares, Alonso, and Gómez-Rodríguez 2015). For sentiment classification of English sentences, we use the Twitter dataset provided by SentiStrength. For sentiment classification of Spanish sentences, we use the Twitter dataset by Villena-Román et al. (2015). For the primary task we use a Hindi-English code-switch dataset provided in a shared task of ICON 2016 (Patra, Das, and Das 2018). For the auxiliary dataset of Hindi POS tagging, we use the data released by Sachdeva et al. (2014). For the auxiliary dataset of English POS tagging, we use the data released in a shared task of CoNLL 2000 (Tjong Kim Sang and Buchholz 2000). For the primary task, i.e., target-dependent sentiment classification, we use the dataset of SemEval 2014 Task 4 (Pontiki et al. 2014). The dataset for its corresponding auxiliary task is Yelp2014. |
| Dataset Splits | No | The paper specifies training and test set sizes for all datasets, but it does not explicitly mention or quantify a separate validation set split or size. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies or their version numbers, such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | We use two LSTMs with 64 hidden units for the auxiliary tasks of English and Spanish sentiment classification. For the primary RNN which produces the gating signal, we use a bidirectional LSTM with 32 units. The word embeddings are initialized randomly and trained along with the model. |
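The experiment-setup row above pins down layer sizes but not the wiring, so the following is a minimal PyTorch sketch of one plausible reading: two auxiliary LSTMs (64 hidden units each) whose states are interleaved per time step by a gate computed from a 32-unit bidirectional primary LSTM. The class name, gating formula, pooling, and all dimensions other than the stated 64/32 hidden units are illustrative assumptions, not the authors' released implementation (see their repository for the real code).

```python
import torch
import torch.nn as nn

class GIRNetSketch(nn.Module):
    """Hypothetical sketch of the reported setup: the scalar sigmoid gate
    and mean-pooling here are assumptions for illustration only."""

    def __init__(self, vocab_size=5000, emb_dim=100, n_classes=3):
        super().__init__()
        # Embeddings are randomly initialized and trained with the model, as reported.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Two auxiliary LSTMs, 64 hidden units each (English / Spanish sentiment).
        self.aux_en = nn.LSTM(emb_dim, 64, batch_first=True)
        self.aux_es = nn.LSTM(emb_dim, 64, batch_first=True)
        # Primary bidirectional LSTM, 32 units per direction -> 64-dim output.
        self.primary = nn.LSTM(emb_dim, 32, batch_first=True, bidirectional=True)
        # Gating signal: one scalar per time step from the primary RNN's output.
        self.gate = nn.Linear(64, 1)
        self.clf = nn.Linear(64, n_classes)

    def forward(self, tokens):
        x = self.emb(tokens)                 # (B, T, emb_dim)
        h_en, _ = self.aux_en(x)             # (B, T, 64)
        h_es, _ = self.aux_es(x)             # (B, T, 64)
        h_pri, _ = self.primary(x)           # (B, T, 64) = 2 directions x 32
        g = torch.sigmoid(self.gate(h_pri))  # (B, T, 1) gating signal
        mixed = g * h_en + (1 - g) * h_es    # interleave auxiliary states
        return self.clf(mixed.mean(dim=1))   # pool over time, then classify

model = GIRNetSketch()
logits = model(torch.randint(0, 5000, (2, 7)))  # batch of 2, length 7
print(logits.shape)
```

Running the snippet prints `torch.Size([2, 3])`: one logit vector per sentence over three sentiment classes, confirming the gated mixture keeps the 64-dimensional auxiliary state size end to end.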