An Input-aware Factorization Machine for Sparse Prediction

Authors: Yantao Yu, Zhen Wang, Bo Yuan

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on three real-world recommendation datasets are used to demonstrate the effectiveness and mechanism of IFM. Empirical results indicate that IFM is significantly better than the standard FM model and consistently outperforms four state-of-the-art deep learning based methods.
Researcher Affiliation | Academia | Yantao Yu, Zhen Wang, Bo Yuan*, Graduate School at Shenzhen, Tsinghua University; {yyt17, z-wang16}@mails.tsinghua.edu.cn, yuanb@sz.tsinghua.edu.cn
Pseudocode | No | The paper describes the model architecture and learning process, but it does not contain any structured pseudocode or algorithm blocks. (A hedged scoring-function sketch follows the table.)
Open Source Code | Yes | Codes are available at https://github.com/gulyfish/Input-aware-Factorization-Machine
Open Datasets | Yes | For regression tasks, we evaluate various prediction models on two public datasets: Frappe [Baltrunas et al., 2015] and MovieLens [Harper and Konstan, 2016]. ... For binary classification, we use the Avazu dataset, which was published in the contest of Avazu Click-Through Rate Prediction in 2014.
Dataset Splits | Yes | We follow the data processing details of NFM [He and Chua, 2017] and randomly split instances by 8:1:1 for training, validation and test. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments.
Software Dependencies | No | The paper states "We implement our method using Tensorflow", but it does not specify the TensorFlow version or any other software dependencies.
Experiment Setup | Yes | To enable a fair comparison, all methods are learned by optimizing the squared loss (Equation 9) using Adagrad (learning rate: 0.01). The batch sizes for Frappe and MovieLens are set to 2048 and 4096, respectively, while the embedding size is set to 256 for all methods. The batch size for Avazu is set to 2048 and the embedding size is set to 40. We use L2 regularization with λ = 0.00001 for NFM, Wide&Deep, DeepFM and IFM. The default setting for the number of neurons per layer in IFM is 256. (A configuration sketch follows the table.)
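
The Pseudocode row above notes that the paper gives no algorithm block. As a reading aid only, below is a minimal NumPy sketch of a second-order FM scoring function with an optional per-instance weight vector standing in for the input-aware reweighting the paper describes; the function name fm_score, the argument instance_weights, and the one-hot feature assumption are illustrative choices, not taken from the paper or its released code.

```python
# Minimal sketch of a second-order FM scoring function (NumPy), with an
# optional per-instance weight vector standing in for IFM's "input-aware"
# reweighting. All names here are illustrative, not from the authors' code.
import numpy as np

def fm_score(nonzero_ids, w0, w, V, instance_weights=None):
    """Score one instance given the indices of its non-zero (one-hot) features.

    nonzero_ids      : array of feature indices active in this instance
    w0               : global bias (scalar)
    w                : first-order weights, shape (num_features,)
    V                : embedding matrix, shape (num_features, k)
    instance_weights : optional per-feature factors for this instance,
                       shape (len(nonzero_ids),); assumed here purely for
                       illustration of input-aware reweighting
    """
    linear = w[nonzero_ids]          # first-order terms of the active features
    embeds = V[nonzero_ids]          # (m, k) embeddings of the active features
    if instance_weights is not None:
        linear = linear * instance_weights           # reweight linear part
        embeds = embeds * instance_weights[:, None]  # reweight embeddings
    # Standard FM pairwise-interaction identity:
    # sum_{i<j} <v_i, v_j> = 0.5 * (||sum_i v_i||^2 - sum_i ||v_i||^2)
    sum_sq = np.square(embeds.sum(axis=0)).sum()
    sq_sum = np.square(embeds).sum()
    return w0 + linear.sum() + 0.5 * (sum_sq - sq_sum)
```

With instance_weights=None this reduces to the standard FM prediction; the network that produces those per-instance factors is deliberately left out of the sketch.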
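
For the Dataset Splits row, here is a minimal sketch of an instance-level random 8:1:1 split. The paper only states the ratio, so the shuffled-index strategy, the helper name split_811, and the fixed seed are assumptions for illustration.

```python
# Minimal sketch of a random 8:1:1 train/validation/test split over instances.
# The seed and shuffling strategy are assumptions; the paper only gives the ratio.
import numpy as np

def split_811(num_instances, seed=2019):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_instances)        # shuffle instance indices
    n_train = int(0.8 * num_instances)
    n_valid = int(0.1 * num_instances)
    train = idx[:n_train]
    valid = idx[n_train:n_train + n_valid]
    test = idx[n_train + n_valid:]              # remaining ~10%
    return train, valid, test
```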
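
For the Experiment Setup row, a hedged TensorFlow 2 / Keras sketch that wires up the reported hyperparameters: Adagrad with learning rate 0.01, squared loss, L2 regularization with λ = 1e-5, 256 neurons per layer, embedding size 256 for Frappe/MovieLens (40 for Avazu), and batch sizes of 2048/4096. The model body is a placeholder dense network, not the IFM architecture, and names such as build_placeholder_model are illustrative.

```python
# Hedged sketch of the reported training configuration in TensorFlow 2 / Keras.
# Only the hyperparameters come from the paper; the model body below is a
# placeholder, not the actual IFM architecture.
import tensorflow as tf

EMBED_SIZE = 256        # 256 for Frappe/MovieLens, 40 for Avazu
HIDDEN_UNITS = 256      # default neurons per layer reported for IFM
L2_LAMBDA = 1e-5        # L2 regularization used for NFM, Wide&Deep, DeepFM, IFM
BATCH_SIZE = 2048       # 2048 for Frappe/Avazu, 4096 for MovieLens

def build_placeholder_model(num_features):
    reg = tf.keras.regularizers.l2(L2_LAMBDA)
    inputs = tf.keras.Input(shape=(None,), dtype=tf.int32)   # non-zero feature ids
    emb = tf.keras.layers.Embedding(num_features, EMBED_SIZE,
                                    embeddings_regularizer=reg)(inputs)
    x = tf.keras.layers.GlobalAveragePooling1D()(emb)
    x = tf.keras.layers.Dense(HIDDEN_UNITS, activation="relu",
                              kernel_regularizer=reg)(x)
    outputs = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
                  loss=tf.keras.losses.MeanSquaredError())   # squared loss
    return model
```

Training with model.fit(..., batch_size=BATCH_SIZE) would apply the reported batch size for the chosen dataset.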