A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks

Authors: Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper initiates a mathematical study of this phenomenon for the downstream task of text classification by considering the following questions: (1) What is the intuitive connection between the pretraining task of next word prediction and text classification? (2) How can we mathematically formalize this connection and quantify the benefit of language modeling? For (1), we hypothesize, and verify empirically, that classification tasks of interest can be reformulated as sentence completion tasks, thus making language modeling a meaningful pretraining task. With a mathematical formalization of this hypothesis, we make progress towards (2) and show that language models that are ϵ-optimal in cross-entropy (log-perplexity) learn features that can linearly solve such classification tasks with O(√ϵ) error, thus demonstrating that doing well on language modeling can be beneficial for downstream tasks. We experimentally verify various assumptions and theoretical findings, and also use insights from the analysis to design a new objective function that performs well on some classification tasks.
Researcher Affiliation | Academia | Nikunj Saunshi, Sadhika Malladi & Sanjeev Arora, Princeton University, {nsaunshi,smalladi,arora}@cs.princeton.edu
Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Link to code: https://github.com/sadhikamalladi/mathematical-exploration-downstream-tasks.
Open Datasets | Yes | For SST, we use the prompt "This movie is" when indicated. For AG News, we use the prompt "This article is about" when indicated. We also test the performance of pre-trained GPT-2 embeddings f and the conditional mean embeddings Φp_f on the DBPedia (Auer et al., 2007), Yahoo Answers (Zhang et al., 2015), TREC (Li & Roth, 2002), IMDb (Maas et al., 2011), Customer Review (CR) (Hu & Liu, 2004), and MPQA polarity (Wilson & Wiebe, 2003) datasets in Table 2.
Dataset Splits | Yes | AG News has 108K train examples, 12K dev examples, and 7,600 test examples. We split the train set for AG News into train and dev (90-10) and use the same test set as the non-finetuning experiments. The sentence version of SST-2 has 6,920 train examples (same as non-finetuning), and 810 examples each for dev and test (the original test set split in half).
Hardware Specification | No | The paper mentions using "parallelization across multiple GPUs" but does not specify any particular GPU models, CPU models, or other hardware.
Software Dependencies | No | The paper mentions Hugging Face (Wolf et al., 2019), the LogisticRegressionCV class from the scikit-learn package, and the Adam (Kingma & Ba, 2014) optimizer, but does not specify version numbers for these dependencies.
Experiment Setup | Yes | To select the best hyperparameter configuration, we run a grid search over learning rate and batch size. We train each model for 10 epochs. For all datasets, we test learning rates 5e-5, 1e-4, and 3e-4. For both versions of SST-2, we try batch sizes 8, 16, and 32, and for AG News, we try batch sizes 8, 12, and 16.
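The prompt-based reformulation quoted in the Open Datasets row can be sketched in a few lines of Python. The prompt strings come from the paper; the helper name `add_prompt` is our own illustration, not code from the released repository:

```python
# Map each task to the completion prompt reported in the paper.
PROMPTS = {
    "sst": "This movie is",
    "ag_news": "This article is about",
}

def add_prompt(text: str, task: str) -> str:
    """Append the task prompt so a language model can finish the sentence
    with a label word (e.g. a sentiment word for SST)."""
    return f"{text} {PROMPTS[task]}"

example = add_prompt("A gripping, beautifully shot film.", "sst")
```

The language model's next-word distribution after the prompt is then read off as the classification signal, which is what turns classification into sentence completion.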
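The 90-10 train/dev split described in the Dataset Splits row can be sketched with the standard library; the actual splitting code in the released repository may differ, and the seed here is arbitrary:

```python
import random

def train_dev_split(examples, dev_frac=0.1, seed=0):
    """Shuffle indices deterministically and carve off a dev fraction."""
    idx = list(range(len(examples)))
    random.Random(seed).shuffle(idx)
    n_dev = int(len(examples) * dev_frac)
    dev = [examples[i] for i in idx[:n_dev]]
    train = [examples[i] for i in idx[n_dev:]]
    return train, dev

# AG News: 108K train examples split 90-10 into train and dev.
train, dev = train_dev_split(list(range(108_000)))
```

With 108K examples this yields 97,200 train and 10,800 dev examples, matching the 12K dev figure only approximately since the paper's 12K dev set comes from the original AG News release.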
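The LogisticRegressionCV class mentioned in the Software Dependencies row is used to fit linear classifiers on frozen features. A minimal sketch, with synthetic 2-D features standing in for GPT-2 embeddings (the data and hyperparameters here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Synthetic, linearly separable features stand in for frozen GPT-2 embeddings.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Cross-validated logistic regression: the linear probe on top of the features.
clf = LogisticRegressionCV(Cs=5, cv=3, max_iter=1000).fit(X, y)
acc = clf.score(X, y)
```

This is the "linearly solve" setting from the abstract: the features are fixed and only a linear classifier is trained on top.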
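The hyperparameter grid quoted in the Experiment Setup row is small enough to enumerate directly; `grid` below is an illustrative helper, not code from the repository:

```python
from itertools import product

LEARNING_RATES = [5e-5, 1e-4, 3e-4]
BATCH_SIZES = {"sst2": [8, 16, 32], "ag_news": [8, 12, 16]}
EPOCHS = 10  # each configuration is trained for 10 epochs

def grid(dataset: str):
    """All (learning rate, batch size) configurations searched for a dataset."""
    return list(product(LEARNING_RATES, BATCH_SIZES[dataset]))
```

Each dataset's grid has 3 × 3 = 9 configurations; the best one is selected by dev-set performance.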