LowFER: Low-rank Bilinear Pooling for Link Prediction

Authors: Saadullah Amin, Stalin Varanasi, Katherine Ann Dunfield, Günter Neumann

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we evaluate on real-world datasets, reaching on par or state-of-the-art performance. At extreme low-ranks, model preserves the performance while staying parameter efficient.
Researcher Affiliation | Academia | ¹German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany; ²Department of Language Science and Technology, Saarland University, Saarbrücken, Germany.
Pseudocode | No | The paper provides mathematical equations and figures illustrating the model, but it does not contain a structured pseudocode block or algorithm section (see the reconstruction sketch after this table).
Open Source Code | No | The authors state: "The authors would like to thank the anonymous reviewers for helpful feedback and gratefully acknowledge the use of code released by Balažević et al. (2019a)." This refers to a third party's code, not their own source code for the described methodology.
Open Datasets | Yes | We conducted the experiments on four benchmark datasets: WN18 (Bordes et al., 2013), WN18RR (Dettmers et al., 2018), FB15k (Bordes et al., 2013) and FB15k-237 (Toutanova et al., 2015) (see Appendix B for the details, including best hyperparameters and additional experiments).
Dataset Splits | No | The paper mentions creating a "training set D" and discusses "test set results," but it does not specify explicit train/validation/test dataset splits (e.g., percentages or sample counts for each split) within the main text.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper does not provide specific version numbers for any software components or libraries used in the experiments.
Experiment Setup | Yes | To train the LowFER model, we follow the setup of Balažević et al. (2019a). ... For d_e = 200 and d_r = 30, we vary k from {1, 5, 10, 30, 50, 100, 150, 200} on FB15k ... we trained our models on FB15k, with d_r = 30, k = 50 constant, and varying d_e in {30, 50, 100, 150, 200, 250, 300, 350, 400} ... with d_r = 50 at k = 150 and l2-regularization 0.0005.
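
Since the Pseudocode row notes that the paper describes its model only through equations, the following is a minimal PyTorch sketch of the LowFER scoring function as we read those equations: a low-rank bilinear pooling of entity and relation embeddings (Hadamard product of two low-rank projections, followed by k-size chunk sum-pooling), scored against all candidate entities. Class and variable names are ours, the initialization scale is illustrative, and the dropout and batch-norm layers used in the paper's training setup are omitted; this is a reading aid, not the authors' implementation.

import torch
import torch.nn as nn

class LowFERScorer(nn.Module):
    """Sketch of the LowFER scoring function, reconstructed from the
    paper's equations; not the authors' released implementation."""

    def __init__(self, num_entities, num_relations, d_e=200, d_r=30, k=50):
        super().__init__()
        self.d_e, self.k = d_e, k
        self.entity = nn.Embedding(num_entities, d_e)
        self.relation = nn.Embedding(num_relations, d_r)
        # Low-rank factors U in R^{d_e x (k*d_e)} and V in R^{d_r x (k*d_e)};
        # the initialization scale here is an illustrative assumption.
        self.U = nn.Parameter(torch.randn(d_e, k * d_e) * 0.01)
        self.V = nn.Parameter(torch.randn(d_r, k * d_e) * 0.01)

    def forward(self, subject_idx, relation_idx):
        e_s = self.entity(subject_idx)     # (batch, d_e)
        e_r = self.relation(relation_idx)  # (batch, d_r)
        # Low-rank bilinear pooling: elementwise product of the two
        # projections, then non-overlapping sum-pooling over chunks of size k.
        x = (e_s @ self.U) * (e_r @ self.V)           # (batch, k * d_e)
        x = x.view(-1, self.d_e, self.k).sum(dim=-1)  # (batch, d_e)
        # Score the pooled vector against every candidate object entity;
        # sigmoid yields a probability for each candidate triple.
        return torch.sigmoid(x @ self.entity.weight.t())

# Usage: FB15k has 14,951 entities and 1,345 relations.
scores = LowFERScorer(14951, 1345)(torch.tensor([0]), torch.tensor([0]))

The defaults above mirror the d_e = 200, d_r = 30, k = 50 configuration quoted in the Experiment Setup row.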
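
As a compact restatement of the sweeps quoted in the Experiment Setup row (a convenience for readers; the dictionary layout and names are ours, only the values come from the quoted text):

# Hyperparameter sweeps quoted in the Experiment Setup row, all on FB15k.
rank_sweep = {"d_e": 200, "d_r": 30, "k": [1, 5, 10, 30, 50, 100, 150, 200]}
embedding_sweep = {"d_r": 30, "k": 50, "d_e": [30, 50, 100, 150, 200, 250, 300, 350, 400]}
extra_setting = {"d_r": 50, "k": 150, "l2_regularization": 0.0005}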