Interpretable Recommendation via Attraction Modeling: Learning Multilevel Attractiveness over Multimodal Movie Contents
Authors: Liang Hu, Songlei Jian, Longbing Cao, Qingkui Chen
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results show the superiority of MLAM over the state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 Advanced Analytics Institute, University of Technology Sydney; 2 Institute of Network Computing & IoT, University of Shanghai for Science and Technology; 3 College of Computer, National University of Defense Technology, China. Emails: rainmilk@gmail.com, jiansonglei@163.com, longbing.cao@uts.edu.au, chenqingkui@usst.edu.cn |
| Pseudocode | Yes | Algorithm 1 The learning procedure for a mini-batch |
| Open Source Code | Yes | The code for more detail is available at: https://github.com/rainmilk/ijcai18-mlma. |
| Open Datasets | Yes | The experiments are conducted on the real-world movie watch dataset MovieLens 1M [Harper and Konstan, 2016]. |
| Dataset Splits | No | The paper states that 'we randomly held out 20% user watch records as the testing set, and the remainder were served as the training set.' It does not explicitly define a separate validation set (see the split sketch after this table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | Our model is implemented using Keras [Chollet, 2015] with TensorFlow as the backend. However, specific version numbers for Keras and TensorFlow are not provided. |
| Experiment Setup | Yes | where the margin parameter needs to be tuned over the data. ... we find that α = 4 performs well through our experiments. ... where α = 2 is set through experiments. ... where α = 1 is set through experiments. |
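
The Dataset Splits row notes an 80/20 random holdout of user watch records with no explicit validation set. Below is a minimal sketch of such a per-record random split; the use of pandas/NumPy, the function name `holdout_split`, and the MovieLens column names are assumptions for illustration, not details taken from the paper or its code.

```python
# Hypothetical sketch of the 80/20 random holdout described in the paper.
# Column names and file layout for MovieLens 1M are assumed, not quoted.
import numpy as np
import pandas as pd


def holdout_split(records: pd.DataFrame, test_frac: float = 0.2, seed: int = 42):
    """Randomly hold out `test_frac` of the watch records as the test set."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(records)) < test_frac
    return records[~mask], records[mask]  # (train, test)


# Example usage with MovieLens-1M ratings loaded into a DataFrame:
# ratings = pd.read_csv("ml-1m/ratings.dat", sep="::", engine="python",
#                       names=["user_id", "movie_id", "rating", "timestamp"])
# train, test = holdout_split(ratings)
```

Because no validation set is defined, a reproduction would either tune hyperparameters (e.g., the margin and α values in the Experiment Setup row) directly on the training portion or carve a validation slice out of it, which is a choice left open by the paper.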