Reinforced Negative Sampling for Recommendation with Exposure Data

Authors: Jingtao Ding, Yuhan Quan, Xiangnan He, Yong Li, Depeng Jin

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on two real-world datasets demonstrate the effectiveness and rationality of our RNS method."
Researcher Affiliation | Academia | Jingtao Ding (1), Yuhan Quan (1), Xiangnan He (2), Yong Li (1) and Depeng Jin (1); (1) Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University; (2) School of Information Science and Technology, University of Science and Technology of China
Pseudocode | Yes | "Algorithm 1: The RNS algorithm."
Open Source Code | Yes | "Our implementation is available at: https://github.com/dingjingtao/ReinforceNS."
Open Datasets | Yes | "We perform experiments on two real-world datasets with both interactions and exposure: Beibei is one of the largest Chinese E-commerce websites. ... Zhihu is the largest question-and-answer website in China, where users click articles of interest to read. Here we use a public benchmark released in the CCIR-2018 Challenge."
Dataset Splits | Yes | "For hyper-parameters tuning we further hold out the latest session from each user's training data as the validation set." Table 1: Statistics of the evaluation datasets.
Dataset | User# | Item# | Train# | Val.# | Test# | Exposure#
Beibei | 66,450 | 59,290 | 1,617,541 | 73,906 | 73,208 | 29,694,415
Zhihu | 16,015 | 45,782 | 2,433,969 | 410,736 | 440,029 | 6,711,820
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running experiments are mentioned.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for any software libraries or dependencies.
Experiment Setup | Yes | "The mini-batch size and embedding size for all methods are set as 1024 and 32, respectively. We search L2 regularizer and learning rate in [10^-6, 10^-5, 10^-4, 10^-3, 10^-2] and [0.0001, 0.0005, 0.001, 0.05, 0.1], respectively, and use Adam optimizer for learning. In addition, the size of negative candidate set, i.e., Ns, is set as 100 and 30 in Beibei and Zhihu, respectively, which is optimal among [10, 20, ..., 150]."
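The Dataset Splits row quotes a leave-latest-session-out protocol: the most recent session of each user's training data is held out for validation. A minimal sketch of that split is below; the function name and the (user, session_id, item) tuple layout are illustrative assumptions, not the authors' released code.

```python
from collections import defaultdict

def hold_out_latest_session(interactions):
    """Hold out each user's latest session, as in the paper's validation split.

    interactions: iterable of (user, session_id, item) tuples, where a larger
    session_id means a more recent session (an assumed layout for this sketch).
    Returns (train, heldout) lists of the same tuples.
    """
    sessions_by_user = defaultdict(list)
    for user, session_id, item in interactions:
        sessions_by_user[user].append((session_id, item))

    train, heldout = [], []
    for user, events in sessions_by_user.items():
        latest = max(sid for sid, _ in events)  # the user's most recent session
        for sid, item in events:
            target = heldout if sid == latest else train
            target.append((user, sid, item))
    return train, heldout
```

Applied per user, this reproduces the hold-out rule regardless of how many sessions each user has; users with a single session contribute that session entirely to the held-out set.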
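The Experiment Setup row fully specifies the search grid (L2 regularizer, learning rate, and negative candidate set size) plus the fixed batch and embedding sizes. The sketch below enumerates that grid; the variable names, dict keys, and function name are assumptions for illustration, while the numeric values come from the quote.

```python
from itertools import product

# Values quoted from the paper's experiment setup.
L2_REG_GRID = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
LEARNING_RATE_GRID = [0.0001, 0.0005, 0.001, 0.05, 0.1]
NEG_CANDIDATE_SIZES = list(range(10, 151, 10))  # [10, 20, ..., 150]

def config_grid():
    """Yield one training configuration per (L2, learning-rate) pair."""
    for l2_reg, lr in product(L2_REG_GRID, LEARNING_RATE_GRID):
        yield {
            "batch_size": 1024,    # fixed for all methods
            "embedding_size": 32,  # fixed for all methods
            "l2_reg": l2_reg,
            "learning_rate": lr,
            "optimizer": "adam",
        }
```

The 5 x 5 grid yields 25 configurations per choice of negative candidate set size; the paper reports 100 (Beibei) and 30 (Zhihu) as the optimal sizes from NEG_CANDIDATE_SIZES.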