Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning In-context $n$-grams with Transformers: Sub-$n$-grams Are Near-Stationary Points

Authors: Aditya Varre, Gizem Yüce, Nicolas Flammarion

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 5, we present empirical evidence that illustrates the structural evolution of these models during training and shows how the transitions between training phases align with the predictions of our theory. In this section, we perform experiments on the disentangled transformer introduced in the previous section to examine the stage-wise learning behavior and analyze the different solutions the transformer learns during different stages of training.
Researcher Affiliation | Academia | Theory of Machine Learning Lab, EPFL, Switzerland. Correspondence to: Aditya Varre <EMAIL>, Gizem Yüce <EMAIL>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It provides mathematical definitions, lemmas, propositions, and experimental setup details in paragraph form.
Open Source Code | Yes | The code is available at https://github.com/tml-epfl/sub-n-grams-are-stationary.
Open Datasets | No | We train our model on data generated from a trigram (n = 3) language model over a vocabulary of size S = 5. Each in-context sequence has a length of T = 32, and the transition probabilities are drawn from a uniform Dirichlet prior, Dir(α1), with α = 0.5. The paper describes a process for generating synthetic data rather than the use of a publicly available dataset with concrete access information.
Dataset Splits | Yes | Each in-context sequence has a length of T = 32... The test loss is evaluated on a separate set of 2^16 sequences.
Hardware Specification | No | The paper describes the model architecture, training parameters (iterations, learning rate, batch size), and data generation, but does not specify hardware details such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not specify any software libraries with version numbers (e.g., PyTorch, TensorFlow, Python, CUDA) that would be needed for reproducibility.
Experiment Setup | Yes | We train our model on data generated from a trigram (n = 3) language model over a vocabulary of size S = 5. Each in-context sequence has a length of T = 32, and the transition probabilities are drawn from a uniform Dirichlet prior, Dir(α1), with α = 0.5. The model is the two-layer simplified Transformer analyzed in our theory, with two heads in the first layer and an embedding dimension of d = 5. We use one-hot token embeddings and train for 2^14 iterations using the Adam optimizer with a constant learning rate of 0.01 and a batch size of 128. No weight decay is used. The test loss is evaluated on a separate set of 2^16 sequences.
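The synthetic data generation described in the Open Datasets and Experiment Setup rows can be sketched as follows. This is a minimal illustration, not the authors' code: the paper fixes n = 3, S = 5, T = 32, and α = 0.5, but the exact sampling procedure (uniform first two tokens, one Dirichlet draw per two-token context) is our assumption.

```python
import numpy as np

# Paper-specified constants: vocabulary size, sequence length, Dirichlet prior.
S, T, ALPHA = 5, 32, 0.5

def sample_trigram_sequence(rng, S=S, T=T, alpha=ALPHA):
    """Draw one in-context sequence from a freshly sampled trigram LM.

    For every two-token context (prev2, prev1), the next-token
    distribution is drawn i.i.d. from Dir(alpha * 1) over S symbols.
    """
    # One categorical distribution per context: shape (S, S, S).
    transitions = rng.dirichlet(alpha * np.ones(S), size=(S, S))
    # First two tokens sampled uniformly (our assumption).
    seq = list(rng.integers(0, S, size=2))
    for _ in range(T - 2):
        p = transitions[seq[-2], seq[-1]]
        seq.append(rng.choice(S, p=p))
    return np.array(seq)

rng = np.random.default_rng(0)
# A training batch of 128 sequences, matching the paper's batch size.
batch = np.stack([sample_trigram_sequence(rng) for _ in range(128)])
print(batch.shape)  # (128, 32)
```

Because each sequence comes with its own transition table, the model must infer the trigram statistics in context rather than memorize a fixed language model.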