Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
AttCAT: Explaining Transformers via Attentive Class Activation Tokens
Authors: Yao Qiang, Deng Pan, Chengyin Li, Xin Li, Rhongho Jang, Dongxiao Zhu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted to demonstrate the superior performance of AttCAT, which generalizes well to different Transformer architectures, evaluation metrics, datasets, and tasks, compared to the baseline methods. |
| Researcher Affiliation | Academia | Department of Computer Science, Wayne State University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at: https://github.com/qiangyao1988/AttCAT. |
| Open Datasets | Yes | Datasets: We evaluate the performance using the following exemplar tasks: sentiment analysis on SST2 [38], Amazon Polarity, Yelp Polarity [39], and IMDB [40] datasets; natural language inference on MNLI [41] dataset; paraphrase detection on QQP [42] dataset; and question answering on SQuADv1 [43] and SQuADv2 [44] datasets. |
| Dataset Splits | No | The paper mentions evaluating on "Entire test sets" and, for some datasets, on a "subset with 2,000 randomly selected samples", but does not specify train/validation/test split percentages or sample counts in the main text in a reproducible manner. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions using "pre-trained models from Huggingface" but does not provide specific software names with version numbers for reproducibility (e.g., "PyTorch 1.9" or "Python 3.8"). |
| Experiment Setup | No | The paper describes the Transformer models (BERT, DistilBERT, RoBERTa) and evaluation metrics used, but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or training configurations for reproducibility in the main text. |