Knowledge-Enhanced Top-K Recommendation in Poincaré Ball
Authors: Chen Ma, Liheng Ma, Yingxue Zhang, Haolun Wu, Xue Liu, Mark Coates
AAAI 2021, pp. 4285-4293
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Via a comparison using three real-world datasets with state-of-the-art methods, we show that the proposed model outperforms the best existing models by 2-16% in terms of NDCG@K on Top-K recommendation. (a sketch of the NDCG@K metric follows the table) |
| Researcher Affiliation | Collaboration | McGill University; Huawei Noah's Ark Lab, Montreal |
| Pseudocode | Yes | Algorithm 1: Iterative Training Procedure |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | The proposed model is evaluated on three real-world datasets from various domains with different sparsities: Amazon-book, Last-FM and Yelp2018, which are fully adopted from (Wang et al. 2019c). |
| Dataset Splits | Yes | From the training set, 10% of interactions are randomly selected as a validation set to tune hyper-parameters. (a split sketch follows the table) |
| Hardware Specification | Yes | Our experiments are conducted with PyTorch running on GPU machines (NVIDIA Tesla V100). |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number, which is required for a reproducible description of software dependencies. |
| Experiment Setup | Yes | In the experiments, the latent dimension of all the models is set to 64. The parameters for all baseline methods are initialized as in the corresponding papers, and are then carefully tuned to achieve optimal performance. The learning rate is tuned amongst [0.0001, 0.0005, 0.001, 0.005, 0.01], and the coefficient of L2 normalization is searched over the range [0.0001, ..., 0.1]. To prevent overfitting, the dropout ratio is selected from the range [0.0, 0.1, ..., 0.9] for NFM, GC-MC, and KGAT. The dimension of the attention network k is tested over the values [16, 32, 64]. (a grid-construction sketch follows the table) |
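
For the NDCG@K figure quoted in the Research Type row, here is a minimal sketch of the metric under binary relevance, the standard setting for implicit-feedback Top-K evaluation; the exact gain definition the paper uses is an assumption:

```python
import numpy as np

def ndcg_at_k(ranked_items, relevant_items, k):
    """NDCG@K with binary relevance: DCG of the top-K ranking
    divided by the DCG of an ideal ranking of the relevant items."""
    ranked_items = ranked_items[:k]
    # Gain is 1 for a held-out (relevant) item, 0 otherwise.
    gains = np.array([1.0 if item in relevant_items else 0.0
                      for item in ranked_items])
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float(np.sum(gains * discounts))
    # Ideal DCG: all relevant items placed at the top of the list.
    n_ideal = min(len(relevant_items), k)
    idcg = float(np.sum(1.0 / np.log2(np.arange(2, n_ideal + 2))))
    return dcg / idcg if idcg > 0 else 0.0

# Example: items 3 and 7 are the user's held-out test items.
print(ndcg_at_k([3, 5, 7, 1, 9], {3, 7}, k=5))  # ~0.92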
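The Dataset Splits row describes randomly holding out 10% of training interactions for validation. A minimal sketch, assuming interactions are (user, item) pairs and the split is uniform over interactions (the paper does not state whether the split is stratified per user):

```python
import numpy as np

def split_validation(train_interactions, val_ratio=0.1, seed=0):
    """Randomly hold out a fraction of training interactions
    as a validation set for hyper-parameter tuning."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(train_interactions))
    n_val = int(len(train_interactions) * val_ratio)
    val = [train_interactions[i] for i in idx[:n_val]]
    train = [train_interactions[i] for i in idx[n_val:]]
    return train, val

# Example with (user, item) pairs.
pairs = [(0, 3), (0, 5), (1, 2), (1, 7), (2, 1),
         (2, 4), (3, 6), (3, 8), (4, 9), (4, 0)]
train, val = split_validation(pairs, val_ratio=0.1)
```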
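The Experiment Setup row quotes the search ranges. The sketch below builds their Cartesian product; the intermediate points of the L2 range "[0.0001, ..., 0.1]" are assumed to be powers of ten, and exhaustive grid search (rather than another tuning strategy) is also an assumption:

```python
from itertools import product

# Grids quoted in the Experiment Setup row.
learning_rates = [0.0001, 0.0005, 0.001, 0.005, 0.01]
l2_coefficients = [0.0001, 0.001, 0.01, 0.1]            # assumed powers of ten
dropout_ratios = [round(i / 10, 1) for i in range(10)]  # 0.0, 0.1, ..., 0.9
attention_dims = [16, 32, 64]

grid = list(product(learning_rates, l2_coefficients,
                    dropout_ratios, attention_dims))
print(len(grid), "configurations")  # 5 * 4 * 10 * 3 = 600
```

Each configuration would be trained once and scored on the held-out validation set, with the best-scoring setting kept for test evaluation.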