CFM: Convolutional Factorization Machines for Context-Aware Recommendation
Authors: Xin Xin, Bo Chen, Xiangnan He, Dong Wang, Yue Ding, Joemon Jose
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on three real-world datasets, demonstrating significant improvement of CFM over competing methods for context-aware top-k recommendation. |
| Researcher Affiliation | Academia | University of Glasgow; Shanghai Jiao Tong University; University of Science and Technology of China |
| Pseudocode | No | The paper describes the model architecture and training process in text and diagrams but does not provide pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/chenboability/CFM |
| Open Datasets | Yes | To evaluate the performance of the proposed CFM model, we conduct comprehensive experiments on three real-world implicit feedback datasets: Frappe, Last.fm and MovieLens. ... Frappe: http://baltrunas.info/research-menu/frappe; Last.fm: http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset; MovieLens: https://grouplens.org/datasets/movielens/latest/ |
| Dataset Splits | Yes | We adopt the leave-one-out evaluation to test the performance of models, which has been widely used in literature [He et al., 2017; Yuan et al., 2016; He et al., 2018b]. More specifically, for Last.fm and MovieLens, the latest transaction of each user is held out for testing and the remaining data is treated as the training set. For the Frappe dataset, because there is no timestamp information, we randomly select one transaction for each specific user context as the test example. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as CPU/GPU models or memory specifications. It only implies the use of a GPU for acceleration: "However, this burden can be largely reduced through GPU acceleration." |
| Software Dependencies | No | The paper mentions that CFM was implemented using TensorFlow, but it does not specify any version numbers for TensorFlow or other software dependencies. |
| Experiment Setup | Yes | To fairly compare the performance of models, we train all of them by optimizing the BPR loss with mini-batch Adagrad [Duchi et al., 2011]. The learning rate is searched in [0.01, 0.02, 0.05] for all models. The batch size is set as 256. For all models except PopRank and FM, we pretrain them using the original FM with 500 iterations. The dropout ratio for NFM, DeepFM, ONCF, and CFM is tuned in [0.1, 0.2, ..., 0.9]. The embedding size and attention factor are set as 64 and 32, respectively. The output channels of CNN-based models (i.e., ONCF and CFM) are set as 32. |
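The leave-one-out protocol quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration for the timestamped datasets (Last.fm, MovieLens), not the authors' actual preprocessing code; the `(user, item, timestamp)` tuple layout is an assumption.

```python
def leave_one_out_split(interactions):
    """Hold out each user's latest interaction as the test example.

    interactions: list of (user, item, timestamp) tuples (assumed layout).
    Returns (train, test), where test contains one record per user.
    """
    latest = {}
    for user, item, ts in interactions:
        # Keep only the most recent record seen for each user.
        if user not in latest or ts > latest[user][2]:
            latest[user] = (user, item, ts)
    held_out = set(latest.values())
    # Everything not held out forms the training set.
    train = [rec for rec in interactions if rec not in held_out]
    return train, sorted(held_out)
```

For Frappe, which the review notes has no timestamps, the analogous step would instead sample one transaction at random per user context.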