Holographic Factorization Machines for Recommendation
Authors: Yi Tay, Shuai Zhang, Anh Tuan Luu, Siu Cheung Hui, Lina Yao, Tran Dang Quang Vinh
AAAI 2019, pp. 5143-5150
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on nine publicly available datasets for collaborative filtering with explicit feedback. HFM achieves state-of-the-art performance on all nine, outperforming strong competitors such as Attentional Factorization Machines (AFM) and Neural Matrix Factorization (NeuMF). |
| Researcher Affiliation | Academia | (1) Nanyang Technological University, Singapore; (2) University of New South Wales, Australia; (3) Institute for Infocomm Research, Singapore |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It provides mathematical equations and architectural diagrams, but no step-by-step algorithmic descriptions in a pseudocode format. |
| Open Source Code | No | The paper states, "We implement all models in Tensorflow5" with footnote 5 linking to "https://www.tensorflow.org/". This is a link to the TensorFlow framework, not the authors' specific implementation code for HFM. There is no other mention of released code or a repository link for their models. |
| Open Datasets | Yes | We conduct extensive experiments on nine publicly available datasets for collaborative filtering with explicit feedback. Netflix is a popular dataset for explicit CF, popularized by the Netflix Prize competition. MovieLens is another popular benchmark for recommendation. IMDb is another movie-based CF dataset. Amazon Product Reviews is a review rating dataset. Footnotes provide URLs: (1) https://www.netflix.com/browse, (2) https://grouplens.org/datasets/movielens/, (3) https://www.imdb.com/, (4) http://jmcauley.ucsd.edu/data/amazon/ |
| Dataset Splits | Yes | For all datasets, we use a time-based split, i.e., we sort all of a user's items by timestamps and withhold the last two as the development and testing sets respectively. |
| Hardware Specification | No | The paper mentions "Due to hardware limitations" and "computational challenge for high end graphic cards" but does not specify any particular GPU models, CPU types, memory, or other specific hardware configurations used for the experiments. |
| Software Dependencies | No | The paper states, "We implement all models in Tensorflow5" with footnote 5 linking to "https://www.tensorflow.org/". However, it does not provide a specific version number for TensorFlow or any other software dependencies like Python, CUDA, or specific libraries. |
| Experiment Setup | Yes | The latent dimensions (embedding size) of all baselines are tuned in the range of {4, 8, 16, 32}. The batch size is set to 1024 in all our experiments. All methods are optimized with Adam (Kingma and Ba 2014) with a learning rate of 0.0003. A dropout of 0.2 is applied to all feed-forward layers. We train each model for a maximum of 50 epochs and compute the score on the held-out set at every epoch. We apply early stopping, i.e., we stop training if performance on the held-out set does not improve after 5 epochs. |
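
For the time-based split described in the Dataset Splits row, a minimal sketch is shown below. It assumes the ratings are available as (user, item, rating, timestamp) tuples; the function name and data layout are illustrative and are not taken from the authors' (unreleased) implementation.

```python
from collections import defaultdict

def time_based_split(interactions):
    """Leave-two-out split per user, following the paper's description.

    For each user, interactions are sorted by timestamp; the last item is
    withheld for testing, the second-to-last for development, and the rest
    are used for training.
    """
    per_user = defaultdict(list)
    for user, item, rating, ts in interactions:
        per_user[user].append((ts, item, rating))

    train, dev, test = [], [], []
    for user, rows in per_user.items():
        rows.sort(key=lambda r: r[0])  # chronological order
        for ts, item, rating in rows[:-2]:
            train.append((user, item, rating))
        if len(rows) >= 2:
            dev.append((user, rows[-2][1], rows[-2][2]))
        if rows:
            test.append((user, rows[-1][1], rows[-1][2]))
    return train, dev, test
```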
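The Experiment Setup row reports the shared training regime: embedding size tuned over {4, 8, 16, 32}, batch size 1024, Adam with a learning rate of 0.0003, dropout of 0.2 on feed-forward layers, at most 50 epochs, and early stopping with a patience of 5 epochs. The sketch below wires those values into a generic Keras rating model; the two-embedding network is only a placeholder, not the holographic (HFM) architecture, and the dataset sizes and variable names are assumptions.

```python
import tensorflow as tf

def build_model(num_users, num_items, embed_dim):
    """Placeholder explicit-feedback rating model (not the HFM architecture)."""
    user_in = tf.keras.Input(shape=(1,), name="user")
    item_in = tf.keras.Input(shape=(1,), name="item")
    u = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(num_users, embed_dim)(user_in))
    i = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(num_items, embed_dim)(item_in))
    x = tf.keras.layers.Concatenate()([u, i])
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.2)(x)   # dropout of 0.2 on feed-forward layers
    out = tf.keras.layers.Dense(1)(x)     # predicted rating (explicit feedback)
    return tf.keras.Model([user_in, item_in], out)

# embed_dim would be tuned over {4, 8, 16, 32} on the development set.
model = build_model(num_users=10_000, num_items=5_000, embed_dim=16)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4), loss="mse")

# Early stopping: stop if the held-out score does not improve for 5 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# model.fit([train_users, train_items], train_ratings,
#           validation_data=([dev_users, dev_items], dev_ratings),
#           batch_size=1024, epochs=50, callbacks=[early_stop])
```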