Symmetric Metric Learning with Adaptive Margin for Recommendation
Authors: Mingming Li, Shuai Zhang, Fuqing Zhu, Wanhui Qian, Liangjun Zang, Jizhong Han, Songlin Hu
AAAI 2020, pp. 4634-4641 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three public recommendation datasets demonstrate that SML produces a competitive performance compared with several state-of-the-art methods. |
| Researcher Affiliation | Academia | (1) Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; (2) School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China; (3) School of Computer Science and Engineering, The University of New South Wales, Australia |
| Pseudocode | No | The paper describes the proposed method and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is released at Github (footnote 9 in the paper). |
| Open Datasets | Yes | We perform extensive experiments on three publicly accessible datasets: Amazon Instant Video [2], Yelp Dataset Challenge (Yelp) [3], and IMDB [4], of which the statistics are summarized in Table 1. ... [2] http://jmcauley.ucsd.edu/data/amazon/ [3] https://www.yelp.com/dataset/challenge [4] http://ir.hit.edu.cn/~dytang/paper/acl2015/dataset.7z |
| Dataset Splits | Yes | To evaluate the recommendation performance, we randomly divide each dataset into training and testing sets at a 9:1 ratio. Moreover, 10% of the records in the training set are randomly selected as a validation set for hyper-parameter selection. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper states 'We implement our model in TensorFlow' but does not specify a version number for TensorFlow or any other key software dependencies. |
| Experiment Setup | Yes | We optimize the proposed SML with the Adam optimizer and tune the learning rate in {0.10, 0.05, 0.01} for different datasets. The embedding size is fixed to 100. The batch size is 512. ... we show the results of all datasets with l = 1.0. (A hedged configuration sketch follows the table.) |
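
The 9:1 train/test split with a further 10% validation hold-out quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, not the authors' released code: the `(user, item)` record format and the fixed `seed` are assumptions, since the paper specifies neither.

```python
import numpy as np

def split_records(records, test_ratio=0.1, val_ratio=0.1, seed=42):
    """9:1 random train/test split, then 10% of the training
    records held out as a validation set, mirroring the
    protocol quoted in the Dataset Splits row."""
    rng = np.random.default_rng(seed)        # seed is an assumption; the paper gives none
    idx = rng.permutation(len(records))

    n_test = int(len(records) * test_ratio)
    test_idx, train_idx = idx[:n_test], idx[n_test:]

    n_val = int(len(train_idx) * val_ratio)  # 10% of the training set
    val_idx, train_idx = train_idx[:n_val], train_idx[n_val:]

    take = lambda ids: [records[i] for i in ids]
    return take(train_idx), take(val_idx), take(test_idx)

# Usage with toy (user, item) interaction records:
records = [(u, i) for u in range(100) for i in range(5)]
train, val, test = split_records(records)
print(len(train), len(val), len(test))  # 405 45 50
```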
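The Experiment Setup row likewise maps onto a small hyper-parameter search. The sketch below is an illustration under stated assumptions: `build_sml_model`, `evaluate`, `train_ds`, `val_ds`, and the epoch count are hypothetical stand-ins for the authors' released TensorFlow implementation; only Adam, the learning-rate grid {0.10, 0.05, 0.01}, embedding size 100, batch size 512, and l = 1.0 come from the paper.

```python
import tensorflow as tf

# Hyper-parameters taken verbatim from the Experiment Setup row.
LEARNING_RATES = [0.10, 0.05, 0.01]  # grid searched per dataset
EMBEDDING_SIZE = 100
BATCH_SIZE = 512
MARGIN_L = 1.0                       # the paper reports results with l = 1.0

def tune(train_ds, val_ds, build_sml_model, evaluate, epochs=30):
    """Pick the learning rate with the best validation score.
    build_sml_model and evaluate are hypothetical hooks standing in
    for the authors' released SML code; train_ds/val_ds are assumed
    to be tf.data.Dataset objects, and epochs is an assumption."""
    best_lr, best_score = None, float("-inf")
    for lr in LEARNING_RATES:
        model = build_sml_model(embedding_size=EMBEDDING_SIZE, margin=MARGIN_L)
        optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
        for _ in range(epochs):
            for batch in train_ds.batch(BATCH_SIZE):
                with tf.GradientTape() as tape:
                    loss = model(batch)  # SML loss on a batch of triples
                grads = tape.gradient(loss, model.trainable_variables)
                optimizer.apply_gradients(zip(grads, model.trainable_variables))
        score = evaluate(model, val_ds)  # e.g. HR@K on the validation set
        if score > best_score:
            best_lr, best_score = lr, score
    return best_lr, best_score
```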