Latent Relation Language Models
Authors: Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, Graham Neubig
AAAI 2020, pp. 7911-7918
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate empirical improvements over both word-based language models and a previous approach that incorporates knowledge graph information. |
| Researcher Affiliation | Collaboration | ¹Carnegie Mellon University, ²Microsoft Research AI {hiroakih, zeconghu, gneubig}@cs.cmu.edu, Chenyan.Xiong@microsoft.com |
| Pseudocode | Yes | Algorithm 1 Generative Process of LRLM (a hedged sketch of this process follows the table). |
| Open Source Code | Yes | Code & Data: https://github.com/neulab/lrlm |
| Open Datasets | Yes | Wiki Facts (Ahn et al. 2016) is a collection of Wikipedia articles restricted to /film/actor domain entities in Freebase (Bollacker et al. 2008). |
| Dataset Splits | Yes | Since official splits for evaluation are not provided, we follow previous work and perform a random split of 80/10/10% (a sketch of such a split follows the table). |
| Hardware Specification | No | The paper mentions implementing and training models in PyTorch, but does not specify the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | We implement all models in PyTorch (Paszke et al. 2017). |
| Experiment Setup | Yes | Training details and hyperparameters are summarized in Appendix B. |
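
For readers unfamiliar with the paper, the generative process named in the Pseudocode row mixes two sources at each step: ordinary vocabulary words and spans copied from the surface forms of knowledge-graph relations, with the choice of source treated as a latent variable that is marginalized out. The sketch below is a heavily simplified, hypothetical PyTorch illustration of that mixture, not the authors' implementation (available at the repository linked above): the module names and shapes are assumptions, the marginalization is collapsed to the per-token level rather than the paper's span-level dynamic programming, and the usual one-step input/target offset for language modeling is omitted for brevity.

```python
import torch
import torch.nn as nn

# Heavily simplified sketch of a latent-relation mixture LM (hypothetical;
# not the authors' code). The real LRLM marginalizes over multi-token span
# segmentations with dynamic programming; here the relation source is
# collapsed to a per-token log-probability for brevity.
class LatentRelationLMSketch(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.source_gate = nn.Linear(hidden_size, 2)   # word vs. relation
        self.word_head = nn.Linear(hidden_size, vocab_size)

    def log_prob(self, tokens, rel_logprobs):
        # tokens: (batch, time) token ids.
        # rel_logprobs: (batch, time) log-probability that position t is
        # generated by copying a relation's surface form (assumed to be
        # precomputed from the knowledge-graph triples of the topic entity).
        h, _ = self.rnn(self.embed(tokens))
        gate = torch.log_softmax(self.source_gate(h), dim=-1)
        word_lp = torch.log_softmax(self.word_head(h), dim=-1)
        word_term = gate[..., 0] + word_lp.gather(
            -1, tokens.unsqueeze(-1)).squeeze(-1)
        rel_term = gate[..., 1] + rel_logprobs
        # Marginalize the latent source: log p(x_t) = logsumexp over sources.
        return torch.logsumexp(torch.stack([word_term, rel_term]), dim=0).sum(dim=-1)
```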
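
Likewise, the 80/10/10 random split quoted in the Dataset Splits row can be reproduced in a few lines. The function name, seed, and `articles` list below are illustrative assumptions, since the paper does not specify the exact shuffling procedure.

```python
import random

# Minimal sketch of an 80/10/10 random split over a list of articles
# (hypothetical; the paper does not describe its exact procedure).
def split_dataset(articles, seed=0):
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = articles[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_valid = int(0.8 * n), int(0.1 * n)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test
```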