Assessing Social and Intersectional Biases in Contextualized Word Representations

Authors: Yi Chern Tan, L. Elisa Celis

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we analyze the extent to which state-of-the-art models for contextual word representations, such as BERT and GPT-2, encode biases with respect to gender, race, and intersectional identities. Towards this, we propose assessing bias at the contextual word level. This novel approach captures the contextual effects of bias missing in context-free word embeddings, yet avoids confounding effects that underestimate bias at the sentence encoding level. We demonstrate evidence of bias at the corpus level, find varying evidence of bias in embedding association tests, show in particular that racial bias is strongly encoded in contextual word models, and observe that bias effects for intersectional minorities are exacerbated beyond their constituent minority identities. (A minimal sketch of the embedding association test statistic appears after this table.)
Researcher Affiliation | Academia | Yi Chern Tan, L. Elisa Celis, Yale University, {yichern.tan, elisa.celis}@yale.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | We use PyTorch, as well as the framework and code from May et al. [21], to conduct the experiments [footnote 6: https://github.com/W4ngatang/sent-bias]. (A sketch of word-level contextual vector extraction appears after this table.)
Open Datasets | Yes | BERT [10] was trained on Wikipedia (2,500M words) [footnote 1: extracted using https://github.com/attardi/wikiextractor on the May 4 Wikipedia dump] and BooksCorpus (800M words) [36]. ELMo [25] was trained on the 1 Billion Word Benchmark (1,000M words) [7]. GPT [26] was trained on BooksCorpus, and GPT-2 was trained on WebText [27] (https://github.com/openai/gpt-2-output-dataset).
Dataset Splits | No | The paper evaluates existing pre-trained models (BERT, GPT-2, ELMo, etc.) for bias; it does not train or re-train any models, so no training/validation/test splits are specified for its experiments.
Hardware Specification | No | The paper does not provide hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper states 'We use PyTorch' in Section 5.1 but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | No | The paper evaluates existing pre-trained contextual word models for bias; it does not describe training these models or provide hyperparameters (e.g., learning rate, batch size, epochs) that would be needed to reproduce model training.
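
For reference, the embedding association tests mentioned in the Research Type row follow the WEAT formulation of Caliskan et al., as extended to encoders by May et al. [21]. Below is a minimal sketch of the effect-size statistic computed over pre-extracted embedding vectors; the function names and the random toy vectors are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def cosine(a, b):
    # Cosine similarity between one vector and a batch of vectors.
    return F.cosine_similarity(a.unsqueeze(0), b, dim=1)

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A
    # minus its mean similarity to attribute set B.
    return cosine(w, A).mean() - cosine(w, B).mean()

def weat_effect_size(X, Y, A, B):
    # Effect size d: difference of mean associations for the two
    # target sets X and Y, normalized by the standard deviation of
    # associations over all targets in X and Y.
    s_X = torch.stack([association(x, A, B) for x in X])
    s_Y = torch.stack([association(y, A, B) for y in Y])
    return (s_X.mean() - s_Y.mean()) / torch.cat([s_X, s_Y]).std()

# Toy demonstration with random 768-dim vectors standing in for
# contextual word representations (e.g., BERT token states).
torch.manual_seed(0)
X = torch.randn(8, 768)   # target set 1 (e.g., one group's terms)
Y = torch.randn(8, 768)   # target set 2
A = torch.randn(8, 768)   # attribute set 1 (e.g., pleasant words)
B = torch.randn(8, 768)   # attribute set 2 (e.g., unpleasant words)
print(weat_effect_size(X, Y, A, B))
```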
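
Similarly, for the Open Source Code row: the paper's experiments extract contextual word representations from pre-trained models through the sent-bias framework (https://github.com/W4ngatang/sent-bias). The sketch below shows one plausible way to pull a word-level contextual vector from BERT using the HuggingFace transformers library; the checkpoint name and first-subword pooling here are assumptions for illustration, not the paper's exact pipeline.

```python
import torch
from transformers import BertModel, BertTokenizer

# Assumed setup: a standard pre-trained BERT checkpoint via
# HuggingFace transformers; the paper itself used the sent-bias
# framework of May et al. [21].
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def contextual_word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the hidden state of the first subword of `word` as it
    appears in `sentence` (word-level, not sentence-level)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    idx = tokens.index(tokenizer.tokenize(word)[0])  # first subword
    return hidden[idx]

vec = contextual_word_vector("The doctor picked up her chart.", "doctor")
print(vec.shape)  # torch.Size([768])
```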