Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities
Authors: Octavian Ganea, Sylvain Gelly, Gary Becigneul, Aliaksei Severyn
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our method improves in two different quality metrics over the traditional Linear-Softmax layer in synthetic and real language model experiments, adding little time or memory overhead, while being comparable to the more computationally expensive mixture of Softmaxes. (A hedged sketch of such a non-linear output layer appears after this table.) |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, ETH Zürich, Switzerland; (2) Google Brain; (3) Google Research. Correspondence to: Octavian-Eugen Ganea <octavian.ganea@inf.ethz.ch>, Sylvain Gelly <sylvaingelly@google.com>. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions "We use the AWD-LSTM open source implementation" (http://github.com/salesforce/awd-lstm-lm), but this refers to a third-party baseline implementation the authors used, not their own source code for the proposed method (LMS-PLIF). |
| Open Datasets | Yes | Datasets. Following previous work (Mikolov; Inan et al., 2016; Kim et al., 2016; Zoph & Le, 2016), we use the two most popular LM datasets: Penn Treebank (Mikolov et al., 2010) and WikiText-2 (Merity et al., 2016). |
| Dataset Splits | Yes | Table 1. Single model perplexities on validation and test sets on Penn Treebank and WikiText-2 datasets. |
| Hardware Specification | Yes | We also show the training time per epoch when using a single Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions the AWD-LSTM open source implementation and optimization via stochastic gradient descent (SGD), but does not provide version numbers for that implementation or for other ancillary software dependencies such as Python or PyTorch. |
| Experiment Setup | Yes | We use embedding dimension 400 for all the models in Table 1. For optimization, we use the strategy described in (Merity et al., 2017), consisting of running stochastic gradient descent (SGD) with a constant learning rate (20.0) until the cross-entropy loss starts stabilizing, and then switching to averaged SGD. (A hedged sketch of this schedule appears after this table.) |
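The proposed output layer replaces the usual Linear-Softmax head with a learnable monotonic pointwise non-linearity applied to the logits before the softmax. Since no official code is released (see the Open Source Code row), the following is only a minimal sketch: the piecewise-linear parameterization with softplus-constrained slopes is an illustrative assumption, not the paper's exact LMS-PLIF formulation, and `MonotonicPointwise` / `NonlinearSoftmaxHead` are hypothetical names.

```python
# Illustrative sketch only: the exact parameterization of the paper's LMS-PLIF
# non-linearity is not reproduced here, just the general "Linear -> learnable
# monotone pointwise f -> softmax" idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonotonicPointwise(nn.Module):
    """Elementwise non-decreasing function f(x) = b + sum_k softplus(w_k) * relu(x - c_k)."""

    def __init__(self, num_pieces: int = 8):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(1))
        self.raw_slopes = nn.Parameter(torch.zeros(num_pieces))       # softplus keeps slopes >= 0
        self.knots = nn.Parameter(torch.linspace(-3.0, 3.0, num_pieces))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        slopes = F.softplus(self.raw_slopes)                          # non-negative slopes => monotone
        hinges = F.relu(x.unsqueeze(-1) - self.knots)                 # (..., num_pieces), each non-decreasing in x
        return self.bias + (hinges * slopes).sum(dim=-1)


class NonlinearSoftmaxHead(nn.Module):
    """Linear -> learnable monotone pointwise f -> softmax, instead of the plain Linear-Softmax head."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.decoder = nn.Linear(hidden_dim, vocab_size)
        self.f = MonotonicPointwise()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        logits = self.f(self.decoder(hidden))                         # pointwise, so cost stays O(vocab), unlike MoS
        return F.log_softmax(logits, dim=-1)
```

Because the non-linearity acts elementwise on the logits, it adds only a handful of parameters and little time or memory overhead, consistent with the paper's comparison against the more computationally expensive mixture of Softmaxes.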
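The Experiment Setup row quotes the AWD-LSTM training recipe of Merity et al. (2017): plain SGD at a constant learning rate of 20.0, switching to averaged SGD (ASGD) once the loss stops improving. Below is a minimal sketch of that schedule; the non-monotone validation-loss trigger is an assumption modeled on the AWD-LSTM codebase, and `model`, `train_one_epoch`, and `evaluate` are hypothetical placeholders for the actual training loop.

```python
# Sketch of the SGD -> ASGD switch described above; the exact switching
# criterion used in the paper's runs is an assumption.
import torch


def fit(model, train_one_epoch, evaluate, epochs: int = 500,
        lr: float = 20.0, nonmono: int = 5):
    """Constant-LR SGD, then ASGD once the validation loss stops improving."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    val_history, switched = [], False
    for _ in range(epochs):
        train_one_epoch(model, optimizer)
        val_loss = evaluate(model)
        # Trigger: no improvement over the best loss seen more than `nonmono` epochs ago.
        if (not switched and len(val_history) > nonmono
                and val_loss > min(val_history[:-nonmono])):
            optimizer = torch.optim.ASGD(model.parameters(), lr=lr, t0=0, lambd=0.0)
            switched = True
        val_history.append(val_loss)
    return model
```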