Sharing Attention Weights for Fast Transformer

Authors: Tong Xiao, Yinqiao Li, Jingbo Zhu, Zhengtao Yu, Tongran Liu

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test our approach on ten WMT and NIST OpenMT tasks. Experimental results show that it yields an average of 1.3X speedup (with almost no decrease in BLEU) on top of a state-of-the-art implementation that has already adopted a cache for fast inference.
Researcher Affiliation | Collaboration | (1) Northeastern University, Shenyang, China; (2) NiuTrans Co., Ltd., Shenyang, China; (3) Kunming University of Science and Technology, Kunming, China; (4) CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China
Pseudocode | Yes | Figure 4: Joint learning of MT models and sharing policies. 1: Function LEARNTOSHARE(layers, model); 2: while policy {πi} does change do; 3: learn a new model given policy {πi}; 4: learn a new policy {πi} on layers given model; 5: return {πi} & model. (A Python sketch of this loop is given after the table.)
Open Source Code | No | The paper mentions the "NiuTrans toolkit [Xiao et al., 2012]" as a tool they used, but it does not provide a link or state that their own code for the described method is open source or publicly available.
Open Datasets | Yes | We used all bilingual data provided within the WMT14 English-German task. ... We followed the standard data setting of the bidirectional translation tasks of German-English, Finnish-English, Latvian-English, and Russian-English. ... We also used parts of the bitext of NIST OpenMT12 to train a Chinese-English system.
Dataset Splits | Yes | We chose newstest2013 as the tuning data, and newstest2014 as the test data. ... For tuning, we concatenated the data of newstest2014-2016. For test, we chose newstest2017. ... The tuning and test sets were MT06 and MT08.
Hardware Specification | Yes | All models were trained for 100k steps with a mini-batch of 4,096 tokens on machines with 8 Nvidia 1080Ti GPUs.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer and the NiuTrans toolkit, but it does not specify version numbers for these or for any other software libraries or frameworks used.
Experiment Setup | Yes | The Transformer system used in our experiments consisted of a 6-layer encoder and a 6-layer decoder. By default, we set dk = dv = 512 and used 2,048 hidden units in the FFN sub-layers. We used multi-head attention (8 heads) because it was shown to be effective for state-of-the-art performance [Vaswani et al., 2017]. Dropout (rate = 0.1) and label smoothing (ϵ_ls = 0.1) were adopted for regularization and for stabilizing training [Szegedy et al., 2016]. We trained the model using Adam with β1 = 0.9, β2 = 0.98, and ϵ = 10^-9 [Kingma and Ba, 2015]. The learning rate was scheduled as described in [Vaswani et al., 2017]: lr = d^-0.5 · min(t^-0.5, t · 4k^-1.5), where t is the step number. All models were trained for 100k steps with a mini-batch of 4,096 tokens. ... For inference, both beam search and batch decoding were used (beam size = 4, batch size = 16). ... θ was tuned on the tuning data, which resulted in an optimal range of [0.3, 0.4] for self-attention and [0.4, 0.5] for encoder-decoder attention. (A short sketch of the learning-rate schedule also follows the table.)
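
The Figure 4 pseudocode quoted in the Pseudocode row alternates between fitting the MT model under a fixed sharing policy and re-deriving the policy from the fitted model. The Python sketch below is one way to read that loop; the callables `train_model` and `derive_policy`, the dict representation of the policy {πi}, and the `max_rounds` safety cap are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the LEARNTOSHARE loop from Figure 4.
# `train_model` and `derive_policy` stand in for "learn a new model given
# policy {pi_i}" and "learn a new policy {pi_i} on layers given model";
# their internals are not specified by the quoted pseudocode.

def learn_to_share(layers, model, train_model, derive_policy, max_rounds=10):
    """Alternate model training and policy search until the policy is stable."""
    # Initial policy: every layer computes its own attention (no sharing).
    policy = {layer: layer for layer in layers}
    for _ in range(max_rounds):                     # guard against non-convergence
        model = train_model(model, policy)          # step 3: new model given {pi_i}
        new_policy = derive_policy(model, layers)   # step 4: new {pi_i} given model
        if new_policy == policy:                    # step 2: stop once {pi_i} no longer changes
            return policy, model
        policy = new_policy
    return policy, model                            # step 5: return {pi_i} & model


# Toy usage with dummy callables (illustration only):
if __name__ == "__main__":
    dummy_train = lambda m, p: m
    dummy_policy = lambda m, ls: {l: max(ls[0], l - 1) for l in ls}  # share with the layer below
    policy, _ = learn_to_share(list(range(6)), model=None,
                               train_model=dummy_train, derive_policy=dummy_policy)
    print(policy)  # {0: 0, 1: 0, 2: 1, 3: 2, 4: 3, 5: 4}
```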
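
The learning-rate formula quoted in the Experiment Setup row, lr = d^-0.5 · min(t^-0.5, t · 4k^-1.5), is the inverse-square-root schedule with warmup from Vaswani et al. [2017]. Below is a minimal sketch, assuming d = 512 (the model width given above) and a 4,000-step warmup (the "4k" term); the function name and the standalone script form are illustrative, not the authors' code.

```python
# Minimal sketch of the quoted schedule: lr = d^-0.5 * min(t^-0.5, t * 4000^-1.5).
# Assumes d = d_model = 512 and a 4,000-step warmup, per the setup row above.

def transformer_lr(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    """Inverse-square-root learning-rate schedule with linear warmup."""
    t = max(step, 1)  # avoid t = 0 at the very first update
    return d_model ** -0.5 * min(t ** -0.5, t * warmup_steps ** -1.5)


if __name__ == "__main__":
    # lr rises linearly to about 7.0e-4 at step 4,000, then decays as t^-0.5.
    for t in (100, 1000, 4000, 100000):
        print(f"step {t:>6}: lr = {transformer_lr(t):.2e}")
```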