MARTA: Leveraging Human Rationales for Explainable Text Classification

Authors: Ines Arous, Ljiljana Dolamic, Jie Yang, Akansha Bhardwaj, Giuseppe Cuccu, Philippe Cudré-Mauroux

AAAI 2021, pp. 5868-5876

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive validation on real-world datasets shows that our framework significantly improves the state of the art both in terms of classification explainability and accuracy." "We conduct an extensive evaluation on two real-world datasets."
Researcher Affiliation | Collaboration | University of Fribourg, Switzerland; armasuisse, Switzerland; Delft University of Technology, Netherlands
Pseudocode | Yes | Algorithm 1: Learning MARTA Parameters
Open Source Code | Yes | "Source code and data are available at https://github.com/eXascaleInfolab/MARTA."
Open Datasets | Yes | "We use two datasets for our experiments: Wiki Tech and Amazon." The Amazon dataset was developed and published by Ramírez et al. (2019); it contains 400 book reviews with ground-truth labels and is released together with workers' rationales. "Source code and data are available at https://github.com/eXascaleInfolab/MARTA."
Dataset Splits | Yes | "We split the datasets into training, validation, and test sets. We use 50% of the data for training and the rest for validation and test with equal split." (An illustrative split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as CPU or GPU models, memory, or cloud computing instance types.
Software Dependencies | No | The paper mentions using "SciBERT" and "ALBERT" as pre-trained language models, but it does not specify version numbers for these or for any other software libraries, programming languages (e.g., Python), or frameworks (e.g., PyTorch, TensorFlow) used in the experiments.
Experiment Setup | No | While the paper describes the model architecture, data splits, and the general learning process (variational inference), it does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or optimizer configurations, which are necessary for full reproducibility.
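
The Dataset Splits row above describes a 50%/25%/25% partition: half of the data for training and the remainder divided equally between validation and test. The sketch below is a minimal illustration of that description, not the authors' released code; the function name, use of scikit-learn, and the random seed are assumptions introduced here for clarity.

```python
# Illustrative sketch of the 50/25/25 split described in the paper.
# Names and the seed are hypothetical, not taken from the MARTA repository.
from sklearn.model_selection import train_test_split

def split_dataset(examples, labels, seed=42):
    # First split: 50% train, 50% held out.
    x_train, x_rest, y_train, y_rest = train_test_split(
        examples, labels, test_size=0.5, random_state=seed)
    # Second split: divide the held-out half equally into validation and test.
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```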