Hybrid Curriculum Learning for Emotion Recognition in Conversation
Authors: Lin Yang, Yi Shen, Yue Mao, Longjun Cai (pp. 11595–11603)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on five representative ERC models. Results on four benchmark datasets demonstrate that the proposed hybrid curriculum learning framework leads to significant performance improvements. We conduct experiments on four ERC benchmark datasets. Empirical results show that our proposed hybrid curriculum learning framework can effectively improve the overall performance of various ERC models, including the state-of-the-art. |
| Researcher Affiliation | Industry | Lin Yang*, Yi Shen*, Yue Mao, Longjun Cai Alibaba Group, Beijing, China {yl176562, sy133447, maoyue.my, longjun.clj}@alibaba-inc.com |
| Pseudocode | Yes | Algorithm 1: Training Process with HCL |
| Open Source Code | No | The paper mentions that baseline models have released their source codes, but there is no explicit statement or link indicating that the authors' own source code for the proposed method is publicly available. |
| Open Datasets | Yes | We evaluate our method on the following four published ERC datasets: IEMOCAP (Busso et al. 2008), MELD (Poria et al. 2019a), DailyDialog (Li et al. 2017), EmoryNLP (Zahiri and Choi 2018). |
| Dataset Splits | Yes | The detailed statistics of the datasets are reported in Table 1, which includes 'Train Val Test' columns for 'Conversations' and 'Utterances' for each dataset. |
| Hardware Specification | Yes | Our experiments are conducted on a single Tesla V100M32 GPU. |
| Software Dependencies | No | The paper does not specify the versions of software dependencies such as programming languages, libraries, or frameworks used in its implementation. |
| Experiment Setup | No | The paper mentions 'tunable hyperparameters include number of buckets in CC, max training epochs during each baby step, interval steps for training target updating in UC, decay factor in UC' and that 'These hyperparameters are manually tuned on each dataset with hold-out validation', but it does not provide their specific values. |
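The paper's "Algorithm 1: Training Process with HCL" is pseudocode only, and the hyperparameter values above are not reported. As a rough illustration of the kind of schedule those hyperparameters control, the sketch below implements a generic "baby steps" curriculum loop: samples are sorted by an assumed difficulty score, split into buckets, and training proceeds on a cumulatively growing easy-to-hard subset. All names (`num_buckets`, `epochs_per_step`, `difficulty`, `train_epoch`) are illustrative assumptions, not the authors' implementation.

```python
def baby_steps(samples, difficulty, num_buckets=5, epochs_per_step=2, train_epoch=None):
    """Generic baby-steps curriculum sketch (assumed, not the paper's code).

    Sorts samples by a user-supplied difficulty score, splits them into
    `num_buckets` buckets, and runs `epochs_per_step` epochs on the union
    of all buckets seen so far, from easiest to hardest.
    Returns the sizes of the training subset at each baby step.
    """
    ordered = sorted(samples, key=difficulty)
    size = max(1, len(ordered) // num_buckets)
    buckets = [ordered[i * size:(i + 1) * size] for i in range(num_buckets - 1)]
    buckets.append(ordered[(num_buckets - 1) * size:])  # remainder goes to the last bucket

    seen = []       # cumulative training subset
    schedule = []   # subset size after each baby step (for inspection)
    for bucket in buckets:
        seen.extend(bucket)
        for _ in range(epochs_per_step):
            if train_epoch is not None:
                train_epoch(seen)  # one epoch over every bucket seen so far
        schedule.append(len(seen))
    return schedule

# Toy example: 10 samples whose difficulty is the value itself.
sched = baby_steps(list(range(10)), difficulty=lambda x: x,
                   num_buckets=5, epochs_per_step=1)
# Subset grows by one bucket (2 samples) per step: [2, 4, 6, 8, 10]
```

In the paper's framework, choices such as the number of buckets and the epochs per baby step are exactly the hyperparameters it says were tuned per dataset with hold-out validation but left unreported.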