Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On Teacher Hacking in Language Model Distillation
Authors: Daniil Tiapkin, Daniele Calandriello, Johan Ferret, Sarah Perrin, Nino Vieillard, Alexandre Rame, Mathieu Blondel
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To study this, we propose a controlled experimental setup involving: (i) an oracle LM representing the ground-truth distribution, (ii) a teacher LM distilled from the oracle, and (iii) a student LM distilled from the teacher. Our experiments reveal the following insights. |
| Researcher Affiliation | Collaboration | 1CMAP, École Polytechnique, Palaiseau, France; Work done during an internship at Google DeepMind. 2Google DeepMind. |
| Pseudocode | No | The paper describes methods and procedures in narrative text and mathematical formulations but does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to a code repository. |
| Open Datasets | Yes | Our experiments use three datasets for training and evaluation: the XSum summarization dataset (Narayan et al., 2018), the WMT-14 en-de translation dataset (Bojar et al., 2014), and the instruction-following dataset Natural Instructions (Mishra et al., 2022; Wang et al., 2022). |
| Dataset Splits | Yes | For the first stage of the training pipeline, where the oracle dataset is built and used for SFT, we use Noracle = 25,000, 50,000, and 100,000 prompts from the XSum, WMT-14 en-de, and Natural Instructions datasets, respectively. For the second stage, which involves the knowledge distillation procedure, we use N = 200,000, 450,000, and 500,000 prompts from these datasets. Both proxy and golden metrics are computed using a held-out validation set of prompts. |
| Hardware Specification | No | The paper mentions the use of T5 models and Flan-T5-XL, but does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions language models like T5 and Flan-T5, but does not provide specific software versions for libraries, frameworks, or operating systems used in the experimental setup. |
| Experiment Setup | Yes | The learning rate for optimization is selected via a grid search over {10⁻⁴, 3·10⁻⁴, 10⁻³}. The distillation procedure starts from the SFT checkpoints of the teacher and student models. Training is carried out over 50 epochs to analyze long-term convergence behavior. For XSum and WMT-14 en-de, we use batch size B = 32; for Natural Instructions, we use batch size B = 64. We always use temperature sampling with a temperature parameter τ = 1 for generations from any model. Appendix C, hyperparameter details for summarization & translation tasks (Task: XSum): Oracle Dataset Size 100,000; Distillation Dataset Size 200,000; Training Steps 390,625; Batch Size 32; Dropout 0.0; Warmup Schedule 100 steps; Optimal Learning Rate (LR) 0.0003; Input Length (Tokenized) 1024; Output Length (Tokenized) 128; Softmax Temperature 1.0. |
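The setup above fixes temperature sampling at τ = 1 for all model generations. As a minimal illustrative sketch (not code from the paper; the function name and interface are hypothetical), token-level temperature sampling over a vector of logits can be written as:

```python
import math
import random

def temperature_sample(logits, tau=1.0, rng=random):
    """Sample a token index from softmax(logits / tau).

    tau = 1.0 samples from the unmodified model distribution, as in the
    paper's setup; tau -> 0 approaches greedy (argmax) decoding.
    """
    # Scale logits by 1/tau, then compute a numerically stable softmax.
    scaled = [l / tau for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

At τ = 1 this reduces to sampling directly from the softmax of the raw logits; lowering τ sharpens the distribution toward the argmax token.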