NeuRec: On Nonlinear Transformation for Personalized Ranking
Authors: Shuai Zhang, Lina Yao, Aixin Sun, Sen Wang, Guodong Long, Manqing Dong
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four real-world datasets demonstrate superior performance on the personalized ranking task. |
| Researcher Affiliation | Academia | University of New South Wales; Nanyang Technological University; Griffith University; University of Technology Sydney |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states 'We implemented our proposed model based on TensorFlow' with a footnote linking to the TensorFlow website, but it does not provide an explicit link to its own source code repository or state that its code is open source. |
| Open Datasets | Yes | We conduct experiments on four real-world datasets: MovieLens HetRec, MovieLens 1M, FilmTrust, and Frappe. The two MovieLens datasets are collected by GroupLens Research [Harper and Konstan, 2015]. MovieLens HetRec was released at HetRec 2011. FilmTrust was crawled from a movie sharing and rating website by Guo et al. [Guo et al., 2013]. Frappe [Baltrunas et al., 2015] is an Android application recommendation dataset. |
| Dataset Splits | No | The paper states, 'We use 80% user-item pairs as training data and hold out 20% as the test set, and estimate the performance based on five random train-test splits.' It also mentions 'We do grid search to determine the hyper-parameters.' While hyperparameter tuning implies a validation process, a specific validation split percentage is not explicitly stated. |
| Hardware Specification | Yes | We implemented our proposed model based on TensorFlow and tested it on an NVIDIA TITAN X Pascal GPU. |
| Software Dependencies | No | The paper mentions that the model was 'implemented ... based on TensorFlow' but does not specify a version number for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | For all the datasets, we implement a five-hidden-layer neural network with a constant structure for the neural network part of NeuRec and use sigmoid as the activation function. For ML-HetRec, we set the neuron number of each layer to 300, the latent factor dimension k to 50, and the dropout rate to 0.03. For ML-1M, the neuron number is set to 300, k to 50, and the dropout rate to 0.03. The neuron size for FilmTrust is set to 150 and k to 40; we do not use dropout for this dataset. For Frappe, the neuron size is set to 300, k to 50, and the dropout rate to 0.03. We set the learning rate to 1e-4 for ML-HetRec, ML-1M, and Frappe; the learning rate for FilmTrust is 5e-5. For ML-HetRec, ML-1M, and FilmTrust we set the regularization rate to 0.1, and for Frappe it is set to 0.01. |
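The architecture described in the Experiment Setup row can be sketched as a forward pass. The following is an illustrative NumPy reconstruction, not the authors' released TensorFlow code: the layer width (300), depth (five hidden layers), latent dimension (k = 50), and sigmoid activation follow the reported ML-1M settings, while the weight initialization, the activation on the output layer, the item-count, and all function names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical ML-1M-style settings from the table above:
# 5 hidden layers of 300 units each, latent dimension k = 50.
n_items, hidden, k = 1000, 300, 50

# MLP mapping a user's raw rating row to a k-dimensional latent factor.
dims = [n_items] + [hidden] * 5 + [k]
weights = [rng.normal(scale=0.01, size=(d_in, d_out))
           for d_in, d_out in zip(dims[:-1], dims[1:])]
biases = [np.zeros(d) for d in dims[1:]]

def user_latent(rating_row):
    """Sketch of the user branch: sigmoid MLP over the rating row."""
    h = rating_row
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h  # shape (k,)

# Item latent factors, learned jointly in the paper; random here.
item_factors = rng.normal(scale=0.01, size=(n_items, k))

def score(rating_row, item_id):
    """Predicted preference: inner product of user latent and item factor."""
    return float(user_latent(rating_row) @ item_factors[item_id])

ratings = rng.random(n_items)   # stand-in for one user's rating row
s = score(ratings, 42)
```

In training, the paper additionally regularizes the parameters (rate 0.1 or 0.01 depending on the dataset) and applies dropout (0.03) to the hidden layers, neither of which appears in this inference-only sketch.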