Accurate and Interpretable Factorization Machines

Authors: Liang Lan, Yu Geng (pp. 4139-4146)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results demonstrate that our proposed method efficiently provides accurate and interpretable prediction. We performed extensive experiments to evaluate our proposed algorithm on both synthetic and real-life benchmark datasets.
Researcher Affiliation | Academia | Liang Lan, Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China (lanliang@comp.hkbu.edu.hk); Yu Geng, Department of Computer Science and Technology, East China Normal University, China (gydatoow@163.com)
Pseudocode | Yes | Algorithm 1: Subspace Encoding Factorization Machines
Open Source Code | No | No statement or link indicating that source code for the described methodology is publicly available was found.
Open Datasets | Yes | These five datasets are publicly available at the LIBSVM website. We report our experimental results in Table 1. The summary of each dataset (i.e., number of samples, number of features and number of classes) is given in the first column of the table.
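The benchmark datasets referenced here are distributed in the LIBSVM sparse text format. A minimal sketch of that format and of loading it, assuming scikit-learn is available (the toy file and its contents are illustrative, not from the paper):

```python
# Sketch: the LIBSVM sparse text format and how to load it.
# Each line reads "<label> <index>:<value> ...", with 1-based indices.
import os
import tempfile

from sklearn.datasets import load_svmlight_file

sample = "1 1:0.5 3:1.2\n-1 2:0.7\n1 1:0.1 2:0.9 3:0.3\n"
path = os.path.join(tempfile.mkdtemp(), "toy.libsvm")
with open(path, "w") as f:
    f.write(sample)

# zero_based=False tells the loader the indices are 1-based.
X, y = load_svmlight_file(path, zero_based=False)  # X is a sparse CSR matrix
print(X.shape, list(y))  # (3, 3) [1.0, -1.0, 1.0]
```

The loader returns a SciPy CSR matrix, which suits these datasets since they are typically high-dimensional and sparse.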
Dataset Splits | Yes | For each dataset, we randomly select 70% as training data and use the remaining 30% as test data. The process is repeated 10 times and we report the average accuracy on test data. The optimal parameter combination is selected by 5-fold cross-validation on training data.
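The split protocol above can be sketched as follows; a logistic-regression classifier and synthetic data stand in for the paper's models and datasets (both are assumptions for illustration):

```python
# Sketch of the evaluation protocol: 10 random 70/30 train/test splits,
# reporting the average test accuracy across repetitions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

accs = []
for seed in range(10):  # repeat the random split 10 times
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, train_size=0.7, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    accs.append(clf.score(Xte, yte))  # accuracy on the held-out 30%

print(round(float(np.mean(accs)), 3))  # average test accuracy
```

Varying `random_state` per repetition is what makes each 70/30 split independent, matching "the process is repeated 10 times".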
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers were explicitly mentioned.
Experiment Setup | Yes | For all five algorithms, the regularization parameter is chosen from {10^-3, 10^-2, ..., 10^2, 10^3}. For Libsvm with rbf kernel, the kernel width is chosen from {2^-5, 2^-4, ..., 2^4, 2^5}. The low-rank parameter m for FM, LLFM and SEFM is chosen from {2, 4, 8, 16, 32, 64}. The parameter b (i.e., the number of bins) of SEFM is chosen from {10, 20, 30, ..., 120}. The optimal parameter combination is selected by 5-fold cross-validation on training data.
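The grid-search-with-cross-validation selection described above can be sketched as below. An rbf-kernel SVM stands in for the five algorithms, and mapping the kernel-width grid {2^-5, ..., 2^5} onto scikit-learn's `gamma` parameter is an assumption; the data are synthetic:

```python
# Sketch: hyper-parameter grids selected by 5-fold CV on training data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.7, random_state=0)

param_grid = {
    "C": [10.0 ** k for k in range(-3, 4)],     # {10^-3, ..., 10^3}
    "gamma": [2.0 ** k for k in range(-5, 6)],  # {2^-5, ..., 2^5}
}
# 5-fold cross-validation on the training split only; the test split
# is touched once, after the best combination has been chosen.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(Xtr, ytr)
print(search.best_params_, round(search.score(Xte, yte), 3))
```

Keeping the test split out of the cross-validation loop is what makes the reported average accuracy an unbiased estimate of generalization.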