Exploring Length Generalization in Large Language Models

Authors: Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, Behnam Neyshabur

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we run careful empirical studies exploring the length generalization capabilities of transformer-based language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale.
Researcher Affiliation | Collaboration | Cem Anil (1, 3), Yuhuai Wu (2), Anders Andreassen (1), Aitor Lewkowycz (1), Vedant Misra (1), Vinay Ramasesh (1), Ambrose Slone (1), Guy Gur-Ari (1), Ethan Dyer (1), Behnam Neyshabur (1); 1: Google Research, Blueshift Team; 2: Google Research; 3: University of Toronto, Vector Institute
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that the code is open-source.
Open Datasets | No | The paper describes creating its own synthetic datasets for the 'parity' and 'variable assignment' tasks ('The data generation procedure involves randomly generating execution flows...'), but it does not provide concrete access information (e.g., a link, DOI, or citation) indicating that these datasets are publicly available. (A hedged data-generation sketch follows the table.)
Dataset Splits | Yes | We trained the networks until the in-distribution validation accuracy settles (20000 gradient steps for parity and 18000 gradient steps for variable assignment). The training lengths are highlighted in grey.
Hardware Specification | No | The paper mentions using 'LaMDA 2 decoder-only models' but does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'LaMDA 2 decoder-only models' and the 'Adafactor optimizer [11]' but does not provide specific version numbers for these or for any other software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | We use the Adafactor optimizer [11] during finetuning, and tune the learning rate, batch size and dropout. We trained the networks until the in-distribution validation accuracy settles (20000 gradient steps for parity and 18000 gradient steps for variable assignment). The paper also shows 'lr: 2e-05, bs: 32' as example hyperparameters in Figure 5. (A hedged finetuning-configuration sketch follows the table.)
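
Since the synthetic 'parity' and 'variable assignment' datasets are not released, the snippet below is a minimal, hypothetical sketch of how parity examples of controllable length could be regenerated. The prompt format, length ranges, and function names are assumptions for illustration, not the authors' code.

```python
import random

def make_parity_example(length, rng):
    """One parity example: a random bit sequence and its parity (sum mod 2).
    The input/target formatting here is illustrative only."""
    bits = [rng.randint(0, 1) for _ in range(length)]
    return {"input": " ".join(map(str, bits)), "target": str(sum(bits) % 2)}

def make_split(num_examples, min_len, max_len, seed=0):
    """Draw example lengths uniformly from [min_len, max_len]; a short range
    stands in for training lengths, a longer range for length-generalization
    evaluation (the exact ranges below are assumptions)."""
    rng = random.Random(seed)
    return [make_parity_example(rng.randint(min_len, max_len), rng)
            for _ in range(num_examples)]

train_set = make_split(10_000, min_len=1, max_len=20)   # in-distribution lengths (assumed)
eval_set = make_split(1_000, min_len=21, max_len=40)    # longer, held-out lengths (assumed)
```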
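
Likewise, the reported finetuning setup can be summarized as a small configuration sketch. The values below only echo what is quoted above (Adafactor, lr 2e-05, batch size 32, 20000/18000 gradient steps); the use of optax's Adafactor and the dropout placeholder are assumptions, since the paper's training stack is not available.

```python
import optax  # assumption: optax's Adafactor as a stand-in for the paper's optimizer

FINETUNE_CONFIG = {
    "learning_rate": 2e-5,  # example value shown in Figure 5
    "batch_size": 32,       # example value shown in Figure 5
    "dropout": None,        # tuned in the paper; the chosen value is not reported
    "train_steps": {"parity": 20_000, "variable_assignment": 18_000},
}

# Adafactor optimizer with the quoted learning rate.
optimizer = optax.adafactor(learning_rate=FINETUNE_CONFIG["learning_rate"])
```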