Towards Trustworthy Explanation: On Causal Rationalization
Authors: Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, Hengrui Cai
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The superior performance of the proposed causal rationalization is demonstrated on real-world review and medical datasets with extensive experiments compared to state-of-the-art methods. |
| Researcher Affiliation | Collaboration | 1 Department of Statistics, University of California Irvine, California, USA; 2 Advanced Analytics, IQVIA, Pennsylvania, USA. |
| Pseudocode | Yes | Algorithm 1 Causal Rationalization |
| Open Source Code | Yes | Our code is publicly available online. |
| Open Datasets | Yes | Beer Review Data. We use the publicly available version of the Beer review dataset also adopted by Bao et al. (2018) and Chen et al. (2022). |
| Dataset Splits | Yes | We follow the same train/validation/test split as Chen et al. (2022) and it is summarized in Table 5. |
| Hardware Specification | Yes | All of our experiments are conducted with PyTorch on 4 V100 GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch" and "BERT-base-uncased" but does not specify their version numbers. |
| Experiment Setup | Yes | For all experiments, we utilize a batch size of 256 and choose the learning rate α ∈ {1e-5, 5e-4, 1e-4}. We train for 10 epochs on all the datasets. For training the causal component, we tune the values of the Lagrangian multiplier µ ∈ {0.01, 0.1, 1} and set k = 5. We set the temperature of the Gumbel-softmax to be 0.5. (A hedged sketch of this selection step follows the table.) |
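The "Experiment Setup" row packs several hyperparameters into one sentence; a minimal PyTorch sketch of the Gumbel-softmax rationale-selection step it describes may make the roles of τ = 0.5 and k = 5 concrete. Everything here is an assumption for illustration: the function name `select_rationale`, the k-independent-draws scheme for picking k tokens, and all variable names are not the authors' code (the paper's Algorithm 1 defines the actual procedure).

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the rationale-selection step quoted in the table row
# above. All names (select_rationale, selector_logits) are illustrative,
# not taken from the authors' released implementation.

def select_rationale(selector_logits: torch.Tensor,
                     k: int = 5, tau: float = 0.5) -> torch.Tensor:
    """Sample a hard k-token rationale mask via the straight-through
    Gumbel-softmax (temperature tau = 0.5, k = 5, as in the reported setup).

    selector_logits: (batch, seq_len) per-token selection scores.
    Returns a (batch, seq_len) mask in {0, 1} with at most k ones per row.
    """
    draws = []
    for _ in range(k):
        # hard=True yields a one-hot sample over token positions in the
        # forward pass while backpropagating the soft Gumbel-softmax
        # gradient (straight-through estimator).
        draws.append(F.gumbel_softmax(selector_logits, tau=tau,
                                      hard=True, dim=-1))
    # Union of the k one-hot draws; clamp in case a position is drawn twice.
    return torch.stack(draws, dim=0).sum(dim=0).clamp(max=1.0)


if __name__ == "__main__":
    # Batch size 256, as in the quoted setup; 128-token inputs are assumed.
    logits = torch.randn(256, 128)
    mask = select_rationale(logits)
    print(mask.shape, mask.sum(dim=-1).max())  # torch.Size([256, 128]), <= 5
```

The remaining quoted values (10 epochs, learning rate α tuned over {1e-5, 5e-4, 1e-4}, Lagrangian multiplier µ tuned over {0.01, 0.1, 1}) would sit in an outer training loop and are omitted here; only the selection step benefits from code.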