Locally Linear Factorization Machines
Authors: Chenghao Liu, Teng Zhang, Peilin Zhao, Jun Zhou, Jianling Sun
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically investigate whether our proposed LLFM-JO method can achieve better performance compared with other state-of-the-art methods that employ the LLFM method with unsupervised anchor point learning (LLFM-APL) and a predefined local coding scheme (LLFM-DO) on benchmark datasets. Furthermore, we examine the efficacy and efficiency of joint optimization. (An illustrative sketch of the LLFM prediction rule follows this table.) |
| Researcher Affiliation | Collaboration | Chenghao Liu¹, Teng Zhang¹, Peilin Zhao², Jun Zhou², Jianling Sun¹ (¹School of Computer Science and Technology, Zhejiang University, China; ²Artificial Intelligence Department, Ant Financial Services Group, China) |
| Pseudocode | Yes | Algorithm 1: Local Coordinate Coding (LCC) Optimization Algorithm (an illustrative coding-weight sketch follows this table). |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | We conduct our experiments on six public datasets. Table 1 gives a brief summary: Banana (3,533 train / 1,767 test / 2 classes); Magic04 (12,680 / 6,340 / 2); IJCNN (49,990 / 91,701 / 2); LETTER (15,000 / 5,000 / 26); MNIST (60,000 / 10,000 / 10); Covtype (387,342 / 193,670 / 2). |
| Dataset Splits | No | Table 1 shows '#Training' and '#Test' splits for each dataset, but does not explicitly mention a separate validation split or dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | For parameter settings, we perform grid search to choose the best parameters for each algorithm on the training set. The paper does not provide concrete hyperparameter values, training configurations, or system-level settings in the main text. (A generic grid-search sketch follows this table.) |
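
The model under evaluation combines factorization machines with a local coding scheme: each anchor point carries its own FM, and a test point's prediction is a weighted sum of the per-anchor FM outputs. As rough orientation for the rows above, here is a minimal NumPy sketch of that prediction rule; `fm_predict`, `llfm_predict`, and the `(w0, w, V)` parameterization are illustrative names under our assumptions, not the authors' notation.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x : (d,) feature vector; w0 : scalar bias; w : (d,) linear weights;
    V : (d, k) latent factor matrix. Uses the O(d*k) identity
    sum_{i<j} <v_i, v_j> x_i x_j
      = 0.5 * sum_f [(sum_i V[i, f] x_i)^2 - sum_i (V[i, f] x_i)^2].
    """
    s = V.T @ x                                     # (k,) per-factor sums
    pairwise = 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())
    return w0 + w @ x + pairwise

def llfm_predict(x, anchors, fm_params, gamma):
    """Locally linear FM: combine per-anchor FM predictions using local
    coding weights gamma(x, anchors). fm_params is a list of (w0, w, V)
    triples, one per anchor; both names are hypothetical."""
    coeffs = gamma(x, anchors)                      # (m,) coding weights
    preds = np.array([fm_predict(x, w0, w, V) for (w0, w, V) in fm_params])
    return float(coeffs @ preds)
```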
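
The Pseudocode row points to the paper's Algorithm 1, which learns the local coding coordinates jointly with the model. Since the report does not reproduce that algorithm, the sketch below shows only a common predefined coding scheme (Gaussian weights over the nearest anchors, normalized to sum to one) of the kind such methods are compared against; `n_nearest` and `beta` are assumed knobs, not values from the paper.

```python
import numpy as np

def local_coding_weights(x, anchors, n_nearest=5, beta=1.0):
    """Predefined local coding: Gaussian similarity to the n_nearest
    anchor points, zero elsewhere, normalized to sum to one. Usable as
    the gamma callable in llfm_predict above."""
    d2 = ((anchors - x) ** 2).sum(axis=1)   # (m,) squared distances
    idx = np.argsort(d2)[:n_nearest]        # indices of nearest anchors
    w = np.zeros(len(anchors))
    w[idx] = np.exp(-beta * d2[idx])
    return w / w.sum()
```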
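
The Experiment Setup row quotes the paper's only tuning detail: grid search on the training set. The following is a generic sketch of that procedure; `train_fn`, `eval_fn`, and the FM-style hyperparameter ranges (latent rank, regularization strength, learning rate) are hypothetical placeholders, not values reported in the paper.

```python
from itertools import product

def grid_search(train_fn, eval_fn, grid):
    """Exhaustive search over a dict mapping hyperparameter names to
    candidate value lists; returns the best-scoring setting."""
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in product(*grid.values()):
        params = dict(zip(keys, values))
        model = train_fn(**params)           # fit one configuration
        score = eval_fn(model)               # higher is better here
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Illustrative grid (values are assumptions, not from the paper):
# grid = {"rank": [10, 20, 40], "reg": [1e-4, 1e-3, 1e-2], "lr": [0.01, 0.1]}
```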