FR: Folded Rationalization with a Unified Encoder
Authors: Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, YuanKai Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that FR improves the F1 score by up to 10.3% as compared to state-of-the-art methods. Our codes are available at https://github.com/jugechengzi/FR. Section 5: Experiments |
| Researcher Affiliation | Collaboration | 1School of Computer Science and Technology, Huazhong University of Science and Technology; 2iWudao Tech. 1{idc_lw, hz_wang, rxli, yuechao, yuankai_zhang}@hust.edu.cn, 2jwang@iwudao.tech |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our codes are available at https://github.com/jugechengzi/FR. |
| Open Datasets | Yes | 1) Beer Reviews (McAuley et al., 2012) is a multi-aspect sentiment prediction dataset widely used in rationalization (Lei et al., 2016; Yu et al., 2019; Chang et al., 2019; Huang et al., 2021; Yu et al., 2021). 2) Hotel Reviews (Wang et al., 2010) is another multi-aspect sentiment classification dataset. |
| Dataset Splits | No | The paper mentions using a 'test set' but does not provide explicit details about train/validation/test dataset splits (e.g., percentages or sample counts) within the main text. |
| Hardware Specification | Yes | All of the models are implemented with PyTorch and trained on an RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for it or other software dependencies. |
| Experiment Setup | Yes | Following DMR and A2R, we use the 100-dimensional GloVe (Pennington et al., 2014) as the word embedding and set the hidden dimension of GRU to be 200. We use Adam (Kingma and Ba, 2014) as the optimizer. All the baselines are tuned many times to find the best hyperparameters. |
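
The Experiment Setup row above can be sketched in PyTorch. This is a minimal, hedged illustration of the stated configuration (100-dimensional GloVe-style embeddings, a GRU with hidden size 200, Adam optimizer), not the authors' implementation; the vocabulary size, classification head, and random embedding initialization are assumptions added to make the example self-contained.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000   # assumption: placeholder vocabulary size
EMBED_DIM = 100       # from the paper: 100-dimensional GloVe embeddings
HIDDEN_DIM = 200      # from the paper: GRU hidden dimension


class GRUEncoder(nn.Module):
    """Sketch of the encoder configuration described in the paper's setup."""

    def __init__(self):
        super().__init__()
        # The paper initializes embeddings with pretrained GloVe vectors;
        # here they are randomly initialized to keep the example self-contained.
        self.embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.gru = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.classifier = nn.Linear(HIDDEN_DIM, 2)  # assumed binary sentiment head

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)            # (batch, seq, 100)
        _, last_hidden = self.gru(embedded)             # (1, batch, 200)
        return self.classifier(last_hidden.squeeze(0))  # (batch, 2)


model = GRUEncoder()
optimizer = torch.optim.Adam(model.parameters())  # Adam, as stated in the setup

batch = torch.randint(0, VOCAB_SIZE, (4, 16))  # 4 sequences of 16 token ids
logits = model(batch)                          # shape: (4, 2)
```

The exact selector/predictor architecture of FR is in the released code at https://github.com/jugechengzi/FR; this fragment only mirrors the hyperparameters quoted in the table.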