On the Accuracy of Self-Normalized Log-Linear Models
Authors: Jacob Andreas, Maxim Rabinovich, Michael I. Jordan, Dan Klein
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evidence suggests that self-normalization is extremely effective, but a theoretical understanding of why it should work, and how generally it can be applied, is largely lacking. We prove upper bounds on the loss in accuracy due to self-normalization, describe classes of input distributions that self-normalize easily, and construct explicit examples of high-variance input distributions. Our theoretical results make predictions about the difficulty of fitting self-normalized models to several classes of distributions, and we conclude with empirical validation of these predictions. In Figure 5, we present empirical evidence that these bounds correctly characterize the difficulty of self-normalization... Section 5 (Experiments): In this section we provide experimental confirmation of these predictions. ... Figure 2a plots the tradeoff between the likelihood gap and the error in the normalizer... Figure 2b shows how the likelihood gap varies as a function of the quantity E[KL(pη(·\|X) ‖ Unif)]. |
| Researcher Affiliation | Academia | Computer Science Division, University of California, Berkeley {jda,rabinovich,jordan,klein}@cs.berkeley.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | In addition to the synthetic data, we compare our results to empirical data [3] from a self-normalized language model. [3] Devlin, J.; Zbib, R.; Huang, Z.; Lamar, T.; Schwartz, R.; Makhoul, J. Fast and robust neural network joint models for statistical machine translation. Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2014. |
| Dataset Splits | No | The paper mentions generating synthetic data and using empirical data from a language model [3], but it does not specify explicit train/validation/test dataset splits, percentages, or sample counts needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper describes generating synthetic data by introducing a "temperature parameter τ" and fitting a self-normalized model, but it does not provide specific experimental setup details such as concrete hyperparameter values, training configurations, or system-level settings. |