Counterfactual Invariance to Spurious Correlations in Text Classification
Authors: Victor Veitch, Alexander D'Amour, Steve Yadlowsky, Jacob Eisenstein
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper's theoretical results are supported by empirical results on text classification experiments. |
| Researcher Affiliation | Collaboration | Victor Veitch (Google Research; University of Chicago), Alexander D'Amour (Google Research), Steve Yadlowsky (Google Research), and Jacob Eisenstein (Google Research) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | No explicit statement or link providing access to the source code for the methodology was found. The paper mentions 'See supplement for experimental details' but does not confirm code availability. |
| Open Datasets | Yes | We build the experimental datasets using Amazon reviews from the product category Clothing, Shoes, and Jewelry [NLM19]. For an additional test on naturally-occurring confounds, we use the multigenre natural language inference (MNLI) dataset [WNB18]. |
| Dataset Splits | No | The paper mentions training and test data, but does not provide specific train/validation/test split percentages or sample counts in the main text. |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU, or TPU models) used for running experiments were provided. |
| Software Dependencies | No | The paper mentions using BERT but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | No | The paper describes training BERT with a cross-entropy loss plus a λ-weighted regularizer, varying λ. However, specific hyperparameter values (e.g., learning rate, batch size, number of epochs) are not provided in the main text; details are deferred to the supplement. |
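
Both datasets named in the Open Datasets row are publicly available. As a hedged illustration only (the paper does not specify tooling), MNLI [WNB18] can be loaded through the Hugging Face `datasets` library, and the Amazon reviews release of [NLM19] is assumed here to be a locally downloaded, gzipped JSON-lines file; the loader choice and file name below are assumptions, not details from the paper.

```python
# Hypothetical data-access sketch; loader choice and file name are assumptions,
# not details taken from the paper.
import gzip
import json

from datasets import load_dataset  # Hugging Face `datasets` package

# MNLI [WNB18] is hosted on the Hugging Face Hub under the id "multi_nli".
mnli = load_dataset("multi_nli")
example = mnli["train"][0]
print(example["premise"], "=>", example["hypothesis"])

def read_amazon_reviews(path="Clothing_Shoes_and_Jewelry.json.gz"):
    """Yield one review dict per line from the gzipped JSON-lines release of [NLM19].

    The file name is illustrative; point it at whatever local copy of the
    category file you have downloaded.
    """
    with gzip.open(path, "rt") as f:
        for line in f:
            yield json.loads(line)
```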
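
The Experiment Setup row mentions training BERT with a cross-entropy loss plus a λ-weighted regularizer. The sketch below shows only the shape of that objective, assuming a PyTorch model that returns both logits and a representation, a binary spurious factor `z` observed at training time, and a simple linear-kernel MMD as a stand-in for the paper's invariance penalty; the actual penalty, λ values, and other hyperparameters are deferred to the paper's supplement.

```python
# Minimal sketch of a "Cross Entropy + λ · Regularizer" objective; names and the
# penalty choice are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def mmd_penalty(phi_a, phi_b):
    """Linear-kernel MMD between two batches of representations (stand-in regularizer)."""
    return (phi_a.mean(dim=0) - phi_b.mean(dim=0)).pow(2).sum()

def training_loss(model, batch, lam=1.0):
    # Assumes model(input_ids, attention_mask) returns (logits, representation).
    logits, phi = model(batch["input_ids"], batch["attention_mask"])
    ce = F.cross_entropy(logits, batch["label"])

    # z is the observed (binary) spurious factor; the batch is assumed to
    # contain examples with both values of z.
    z = batch["z"].bool()
    penalty = mmd_penalty(phi[z], phi[~z])

    return ce + lam * penalty
```

Varying `lam` sweeps the trade-off the table describes: `lam=0` recovers plain cross-entropy training, and larger values weight the invariance penalty more heavily.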