Towards Efficient Pre-Trained Language Model via Feature Correlation Distillation

Authors: Kun Huang, Xin Guo, Meng Wang

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate that our distilled, smaller language models significantly surpass existing KD methods across various NLP tasks.
Researcher Affiliation | Industry | Kun Huang, Xin Guo, Meng Wang (Ant Group) {hunterkun.hk,darren.wm,bangzhu.gx}@antgroup.com
Pseudocode | Yes | The implementation details of FCD are presented in Figure 4. Utilizing the output features from both the student and teacher models, denoted as Fs and Ft respectively, our method ensures ease of implementation. Figure 4: The PyTorch implementation of FCD. (An illustrative sketch of such a loss follows the table.)
Open Source Code | No | The paper provides a PyTorch implementation snippet (Figure 4) but does not include an explicit statement about open-sourcing the full code or provide a repository link.
Open Datasets | Yes | We evaluate the efficacy of the proposed FCD on 8 out of 9 tasks from the General Language Understanding Evaluation (GLUE) benchmark [Wang et al., 2019], which consist of 2 single-sample classification tasks (CoLA and SST-2), 5 sample-pair classification tasks (MRPC, QQP, MNLI, QNLI and RTE), and 1 regression task (STS-B). (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | We employ a grid search algorithm on the development set to tune the hyper-parameters.
Hardware Specification | No | The paper mentions that models are 'evaluated on a single GPU' but does not specify the GPU model or any other hardware details for running the experiments.
Software Dependencies | No | The paper does not provide version numbers for its software dependencies (e.g., 'PyTorch 1.x', 'Python 3.x').
Experiment Setup | Yes | Specifically, we trained the student model for 3, 5 and 10 epochs, using a batch size of 32. The learning rates we experimented with were 2e-5, 1e-5 and 5e-6. For the CoLA task, we extended training to 25 epochs. The parameters α and β of the distillation loss are tuned over {0.1, 0.2, 0.4, 1}, a choice guided by keeping the different components of the loss at the same order of magnitude. We adopt a cosine decay schedule with a warm-up phase of 5 epochs and use the AdamW optimizer with a weight decay of 0.5. The maximum sequence length is set to 64 for single-sample tasks and 128 for sequence-pair tasks. (A configuration sketch follows the table.)
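
The Pseudocode row notes that Figure 4 of the paper gives a PyTorch implementation of FCD built from the student and teacher output features Fs and Ft, but the figure itself is not reproduced in this entry. The snippet below is therefore only an illustrative sketch of one common way to realize a feature-correlation objective: L2-normalize the token features, form their Gram (correlation) matrices, and match the student's matrix to the teacher's with an MSE loss. The function name `fcd_loss` and the exact normalization and loss choices are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F


def fcd_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
    """Illustrative feature-correlation distillation loss (not the paper's Figure 4).

    f_s, f_t: student / teacher output features of shape (batch, seq_len, hidden).
    The hidden sizes may differ, since only the seq_len x seq_len correlation
    matrices are compared, not the raw features.
    """
    # L2-normalize token features so each Gram matrix holds cosine similarities.
    f_s = F.normalize(f_s, dim=-1)
    f_t = F.normalize(f_t, dim=-1)

    # Token-token correlation matrices, shape (batch, seq_len, seq_len).
    corr_s = torch.bmm(f_s, f_s.transpose(1, 2))
    corr_t = torch.bmm(f_t, f_t.transpose(1, 2))

    # Push the student's correlation structure toward the teacher's.
    return F.mse_loss(corr_s, corr_t)
```

In a full distillation objective this term would be weighted (e.g., by the α and β mentioned in the Experiment Setup row) and combined with the usual task and logit-distillation losses.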
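
The Open Datasets row lists the 8 GLUE tasks used for evaluation. The paper does not state how the data was obtained; as one assumed route, all of the tasks are available through the HuggingFace `datasets` library:

```python
from datasets import load_dataset  # HuggingFace `datasets` (an assumption; the paper does not name a loader)

# The 8 GLUE tasks evaluated in the paper (WNLI is the GLUE task left out).
GLUE_TASKS = ["cola", "sst2", "mrpc", "qqp", "mnli", "qnli", "rte", "stsb"]

glue = {task: load_dataset("glue", task) for task in GLUE_TASKS}
print(glue["mrpc"])  # DatasetDict with train / validation / test splits
```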
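
The Experiment Setup row spells out the search grid, optimizer, and schedule. The sketch below restates those values as a runnable configuration, assuming the HuggingFace `transformers` cosine-warmup helper; `train_and_evaluate` is a hypothetical placeholder for the per-task training loop, not something the paper provides.

```python
from itertools import product

import torch
from transformers import get_cosine_schedule_with_warmup

# Values quoted in the Experiment Setup row (CoLA uses 25 epochs instead).
EPOCHS = [3, 5, 10]
LEARNING_RATES = [2e-5, 1e-5, 5e-6]
ALPHAS = BETAS = [0.1, 0.2, 0.4, 1.0]
BATCH_SIZE = 32
WARMUP_EPOCHS = 5
WEIGHT_DECAY = 0.5
MAX_LEN = {"single": 64, "pair": 128}


def build_optimizer_and_schedule(model, lr, epochs, steps_per_epoch):
    """AdamW with weight decay 0.5 and cosine decay after a 5-epoch warm-up."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=WEIGHT_DECAY)
    scheduler = get_cosine_schedule_with_warmup(
        optimizer,
        num_warmup_steps=WARMUP_EPOCHS * steps_per_epoch,
        num_training_steps=epochs * steps_per_epoch,
    )
    return optimizer, scheduler


# Grid search, scored on the GLUE development set as described above.
for epochs, lr, alpha, beta in product(EPOCHS, LEARNING_RATES, ALPHAS, BETAS):
    ...  # train_and_evaluate(epochs, lr, alpha, beta)  -- hypothetical helper
```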