Sampling-Decomposable Generative Adversarial Recommender
Authors: Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluate the proposed algorithm with five real-world recommendation datasets. The results show that SD-GAR outperforms IRGAN by 12.4% and the SOTA recommender by 10% on average. |
| Researcher Affiliation | Collaboration | School of Computer Science and Technology, University of Science and Technology of China; School of Data Science, University of Science and Technology of China; Microsoft Research Asia |
| Pseudocode | Yes | Algorithm 1 shows the overall procedure of iteratively updating the discriminator and the generator, where the parameters of the generator are randomly initialized. (A toy sketch of this alternating loop is given below the table.) |
| Open Source Code | No | The paper mentions using authors' released code for baselines (CML, IRGAN), but does not state that the source code for SD-GAR is provided or publicly available. |
| Open Datasets | Yes | As shown in Table 1, five publicly available real-world datasets are used for evaluating the proposed algorithm. Amazon: http://jmcauley.ucsd.edu/data/amazon; MovieLens: https://grouplens.org/datasets/movielens; CiteULike: https://github.com/js05212/citeulike-t; Gowalla: http://snap.stanford.edu/data/loc-gowalla.html; Echonest: https://blog.echonest.com/post/3639160982/million-song-dataset |
| Dataset Splits | Yes | For each user, we randomly sample 80% of her ratings into a training set and the remaining 20% into a testing set. 10% of the ratings in the training set are used for validation. (A minimal split helper is sketched below the table.) |
| Hardware Specification | Yes | Our proposed SD-GAR is implemented based on TensorFlow and trained with the Adam algorithm on a Linux system (2.10 GHz Intel Xeon Gold 6230 CPUs and a Tesla V100 GPU). |
| Software Dependencies | No | The paper mentions "TensorFlow" and the "Adam algorithm" but does not specify version numbers, nor the Linux distribution or version used. |
| Experiment Setup | Yes | Unless otherwise specified, the dimension of user and item embeddings is set to 32. The batch size is fixed to 512 and the learning rate is fixed to 0.001. We impose L2 regularization to prevent overfitting and its coefficient is tuned over {0.01, 0.03, 0.05} on the validation set. The size of the item sample set for learning the discriminator is set to 5. The size of the item and context sample sets for learning the generator is set to 64. The temperature T and the coefficients λX and λY are tuned over {0.1, 0.5, 1}. (These values are collected into a config sketch below the table.) |
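
The paper provides pseudocode (Algorithm 1) but, as noted above, no released SD-GAR implementation. The following is a toy, self-contained sketch of the alternating loop it describes, under our own simplifying assumptions: the discriminator is a matrix-factorization scorer updated BPR-style on negatives drawn from the generator, and the generator is an explicit per-user categorical distribution refreshed with a tempered softmax that stands in for the paper's closed-form, sampling-decomposable update. All names and data here are hypothetical, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 32

# Toy interactions: one observed (positive) item per user.
positives = rng.integers(0, n_items, size=n_users)

# Discriminator: matrix-factorization scores s(u, i) = <P[u], Q[i]>.
P = rng.normal(scale=0.1, size=(n_users, dim))
Q = rng.normal(scale=0.1, size=(n_items, dim))

# Generator: an explicit per-user categorical distribution over items.
# (The actual SD-GAR generator is factorized so that sampling
# decomposes and never materializes a full n_users x n_items table.)
G = np.full((n_users, n_items), 1.0 / n_items)

lr, n_neg, T = 0.001, 5, 1.0
for epoch in range(10):
    # 1) Update the discriminator on negatives drawn from the generator.
    for u in range(n_users):
        pos = positives[u]
        negs = rng.choice(n_items, size=n_neg, p=G[u])
        for neg in negs:
            x = P[u] @ (Q[pos] - Q[neg])
            g = 1.0 / (1.0 + np.exp(x))  # sigmoid(-x), BPR-style gradient
            dP, dQpos, dQneg = g * (Q[pos] - Q[neg]), g * P[u], -g * P[u]
            P[u] += lr * dP
            Q[pos] += lr * dQpos
            Q[neg] += lr * dQneg
    # 2) Refresh the generator toward the discriminator's scores; SD-GAR
    #    derives a closed-form optimum, approximated here by a tempered
    #    softmax over the current discriminator scores.
    scores = (P @ Q.T) / T
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    G = np.exp(scores)
    G /= G.sum(axis=1, keepdims=True)
```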
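
As a concrete reading of the split protocol quoted in the Dataset Splits row, here is a minimal sketch (our own helper, not from the paper) that splits each user's ratings 80/20 into train/test and then holds out 10% of the training portion for validation:

```python
import numpy as np

def split_per_user(user_items, seed=0):
    """Per-user split: 80% train / 20% test, with 10% of the training
    ratings held out for validation. `user_items` maps a user id to a
    list of rated item ids (hypothetical input format)."""
    rng = np.random.default_rng(seed)
    train, valid, test = {}, {}, {}
    for user, items in user_items.items():
        items = rng.permutation(items)          # shuffle before slicing
        n_test = max(1, int(round(0.2 * len(items))))
        test[user] = items[:n_test].tolist()
        rest = items[n_test:]
        n_valid = int(round(0.1 * len(rest)))   # 10% of the training set
        valid[user] = rest[:n_valid].tolist()
        train[user] = rest[n_valid:].tolist()
    return train, valid, test
```

For example, `split_per_user({0: list(range(10))})` puts 2 of the 10 items into the test set and roughly 1 of the remaining 8 into the validation set.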
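
For convenience, the hyperparameters from the Experiment Setup row can be collected into a single configuration; the grid-valued entries are the ones the paper tunes on the validation set, and the key names are ours:

```python
# Hyperparameters as reported in the paper; key names are hypothetical.
SD_GAR_CONFIG = {
    "embedding_dim": 32,                        # user/item embedding size
    "batch_size": 512,
    "learning_rate": 1e-3,                      # Adam
    "l2_coefficient_grid": [0.01, 0.03, 0.05],  # tuned on validation set
    "discriminator_item_samples": 5,            # item sample set size
    "generator_sample_size": 64,                # item and context samples
    "temperature_grid": [0.1, 0.5, 1.0],        # T, lambda_X, lambda_Y
}
```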