PD-GAN: Adversarial Learning for Personalized Diversity-Promoting Recommendation

Authors: Qiong Wu, Yong Liu, Chunyan Miao, Binqiang Zhao, Yin Zhao, Lu Guan

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that PD-GAN is superior at generating recommendations that are both diverse and relevant.
Researcher Affiliation | Collaboration | Qiong Wu (1,2), Yong Liu (1,2), Chunyan Miao (1,2,3), Binqiang Zhao (4), Yin Zhao (4) and Lu Guan (4). (1) Alibaba-NTU Singapore Joint Research Institute; (2) The Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY); (3) School of Computer Science and Engineering, Nanyang Technological University; (4) Alibaba Group.
Pseudocode | Yes | Algorithm 1: Adversarial Training for PD-GAN (a hedged training-loop sketch is given after the table).
Open Source Code | No | The paper does not provide an explicit statement or link indicating that the code for the described methodology has been open-sourced.
Open Datasets | Yes | We experiment with two public datasets: MovieLens (100k) [1] and Anime [2]. The MovieLens dataset consists of 100k ratings (1 to 5) from users to movies. It contains 18 explicit categories, and each movie may belong to more than one category. The Anime dataset consists of 1 million ratings (1 to 10) from users to animes. It contains 44 explicit categories, and each anime may belong to more than one category. [1] https://grouplens.org/datasets/movielens/100k/ [2] https://www.kaggle.com/CooperUnion/anime-recommendations-database
Dataset Splits | No | For training and testing data splitting, we apply a 4:1 random split on the two datasets. The paper mentions training and testing splits but does not explicitly specify a distinct validation split (a loading and splitting sketch is given after the table).
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies | No | The paper mentions various models and methods but does not provide specific version numbers for software dependencies or libraries used in the implementation.
Experiment Setup | Yes | For both datasets, we use an embedding size of 30. The learning rate is set to 0.01. The parameter α is set to 0.9.
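
The pseudocode referenced above (Algorithm 1) is not reproduced here, but the following is a minimal sketch of what an alternating adversarial training step of this kind could look like in PyTorch. All class and function names (SetDiscriminator, SetGenerator, train_step) are hypothetical, and the generator uses a simple sequential categorical sampler as a stand-in for the personalized DPP-based sampling described in the paper; the sketch illustrates the GAN-style alternation with a policy-gradient generator update, not the authors' exact procedure.

```python
# Hedged sketch of an alternating adversarial training step in the spirit of
# "Algorithm 1: Adversarial Training for PD-GAN". All names below are
# hypothetical placeholders; the set sampler stands in for the paper's
# personalized DPP-based generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 30  # embedding size reported in the experiment setup


class SetDiscriminator(nn.Module):
    """Scores how likely an item set is a real (relevant and diverse) set for a user."""

    def __init__(self, n_users, n_items, dim=EMBED_DIM):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, item_sets):
        u = self.user_emb(users)                      # (B, dim)
        v = self.item_emb(item_sets).mean(dim=1)      # (B, dim) mean-pooled set embedding
        return (u * v).sum(dim=-1)                    # (B,) raw logits


class SetGenerator(nn.Module):
    """Samples a personalized item set and returns its log-probability."""

    def __init__(self, n_users, n_items, dim=EMBED_DIM):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def sample(self, users, k):
        u = self.user_emb(users)                      # (B, dim)
        logits = u @ self.item_emb.weight.t()         # (B, n_items) relevance scores
        mask = torch.zeros_like(logits, dtype=torch.bool)
        picked, log_prob = [], 0.0
        for _ in range(k):
            dist = torch.distributions.Categorical(
                logits=logits.masked_fill(mask, float("-inf")))
            item = dist.sample()                                 # (B,)
            log_prob = log_prob + dist.log_prob(item)
            mask = mask.scatter(1, item.unsqueeze(1), True)      # no repeats within a set
            picked.append(item)
        return torch.stack(picked, dim=1), log_prob              # (B, k), (B,)


def train_step(gen, dis, users, real_sets, opt_g, opt_d, k=5):
    # Discriminator step: real user sets vs. sets sampled from the generator.
    with torch.no_grad():
        fake_sets, _ = gen.sample(users, k)
    d_real, d_fake = dis(users, real_sets), dis(users, fake_sets)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: policy gradient, with the discriminator score as the reward.
    fake_sets, log_prob = gen.sample(users, k)
    reward = torch.sigmoid(dis(users, fake_sets)).detach()
    g_loss = -(reward * log_prob).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```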
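
For the data preparation described in the Open Datasets and Dataset Splits rows, a minimal sketch is shown below. It assumes the standard MovieLens 100k u.data file (tab-separated user id, item id, rating, timestamp) and applies a plain global 4:1 random split; whether the authors split globally or per user, and which random seed they used, is not stated in the paper.

```python
# Minimal sketch: load MovieLens 100k ratings and apply a 4:1 random
# train/test split. The global (rather than per-user) split and the fixed
# seed are assumptions; the paper only states "a 4:1 random splitting".
import pandas as pd

ratings = pd.read_csv(
    "ml-100k/u.data", sep="\t",
    names=["user_id", "item_id", "rating", "timestamp"],
)

shuffled = ratings.sample(frac=1.0, random_state=42).reset_index(drop=True)
n_train = int(len(shuffled) * 0.8)           # 4 parts train, 1 part test
train_df, test_df = shuffled.iloc[:n_train], shuffled.iloc[n_train:]

print(len(train_df), len(test_df))           # roughly 80,000 / 20,000 ratings
```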
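
Finally, the reported hyper-parameters can be wired into the training sketch above as follows. The optimizer choice (plain SGD) and the user/item counts (the standard MovieLens 100k sizes) are assumptions; the paper reports only the embedding size, the learning rate, and α.

```python
# Wiring the reported hyper-parameters into the sketch above. The optimizer
# (SGD) is an assumption; the paper reports only embedding size 30,
# learning rate 0.01, and alpha = 0.9.
import torch

EMBED_DIM = 30
LR = 0.01
ALPHA = 0.9                    # the parameter alpha reported in the experiment setup

n_users, n_items = 943, 1682   # standard MovieLens 100k user/item counts
gen = SetGenerator(n_users, n_items, dim=EMBED_DIM)
dis = SetDiscriminator(n_users, n_items, dim=EMBED_DIM)
opt_g = torch.optim.SGD(gen.parameters(), lr=LR)
opt_d = torch.optim.SGD(dis.parameters(), lr=LR)
```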