On Layer Normalization in the Transformer Architecture
Authors: Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, Tie-Yan Liu
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show in our experiments that Pre-LN Transformers without the warm-up stage can reach comparable results with baselines while requiring significantly less training time and hyper-parameter tuning on a wide range of applications. |
| Researcher Affiliation | Collaboration | (1) CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences; (3) Center for Data Science, Peking University, Beijing Institute of Big Data Research; (4) Key Laboratory of Machine Perception, MOE, School of EECS, Peking University; (5) Microsoft Research; (6) College of Computer Science, Nankai University. |
| Pseudocode | No | The paper describes the model architecture and computations mathematically and in tables, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or include a link to a code repository for its methodology. |
| Open Datasets | Yes | We conduct our experiments on two widely used tasks: the IWSLT14 German-to-English (De-En) task and the WMT14 English-to-German (En-De) task. [...] We follow (Devlin et al., 2018) to use English Wikipedia corpus and Book Corpus for pre-training. |
| Dataset Splits | Yes | The training-validation ratio for pre-training is 199:1. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using optimizers like Adam and SGD but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | For Adam, we set lr_max = 5e-4 or 1e-3, and for SGD, we set lr_max = 5e-3 or 1e-3. When the warm-up stage is used, we set T_warmup = 4000 as suggested by the original paper (Vaswani et al., 2017). [...] On the IWSLT14 De-En task, we set the initial learning rate to be 5e-4 and decay the learning rate at the 8-th epoch by 0.1. [...] For the Pre-LN BERT, we use linear learning rate decay starting from 3e-4 without the warm-up stage. |
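Because the paper provides no pseudocode (see the Pseudocode row above), the following is a minimal PyTorch sketch, not the authors' implementation, contrasting the Post-LN sublayer ordering of the original Transformer with the Pre-LN ordering studied in the paper. Class names, hidden sizes, and the use of `nn.MultiheadAttention` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of Post-LN vs. Pre-LN Transformer layers.
import torch.nn as nn


class PostLNLayer(nn.Module):
    """Post-LN: LayerNorm is applied after each residual addition (original Transformer)."""

    def __init__(self, d_model=512, nhead=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        x = self.ln1(x + self.attn(x, x, x)[0])  # residual add, then LayerNorm
        x = self.ln2(x + self.ffn(x))            # residual add, then LayerNorm
        return x


class PreLNLayer(nn.Module):
    """Pre-LN: LayerNorm is applied to the sublayer input, inside the residual branch."""

    def __init__(self, d_model=512, nhead=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h)[0]            # LayerNorm before attention, residual outside
        x = x + self.ffn(self.ln2(x))            # LayerNorm before FFN, residual outside
        return x
```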
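The Experiment Setup quote describes two learning-rate schedules: a warm-up schedule for the Post-LN baseline and a no-warm-up linear decay for the Pre-LN BERT. The sketch below is a hedged illustration of both; it assumes the inverse-square-root decay of Vaswani et al. (2017) after warm-up for the baseline, and the function names and the `total_steps` parameter are hypothetical.

```python
# Hedged sketch of the two schedules quoted in the Experiment Setup row (step-wise learning rates).
def warmup_then_inverse_sqrt(step, lr_max=5e-4, t_warmup=4000):
    """Post-LN baseline: linear warm-up to lr_max over t_warmup steps,
    then inverse-square-root decay (assumed, following Vaswani et al., 2017)."""
    step = max(step, 1)
    if step <= t_warmup:
        return lr_max * step / t_warmup
    return lr_max * (t_warmup / step) ** 0.5


def linear_decay_no_warmup(step, total_steps, lr_init=3e-4):
    """Pre-LN BERT setting: linear decay from lr_init with no warm-up stage."""
    return lr_init * max(0.0, 1.0 - step / total_steps)
```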