Sequential and Diverse Recommendation with Long Tail

Authors: Yejin Kim, Kwangseob Kim, Chanyoung Park, Hwanjo Yu

IJCAI 2019

Reproducibility variables, results, and supporting LLM responses:
Research Type: Experimental
LLM Response: Extensive online and offline experiments deployed on a commercial platform demonstrate that our models significantly increase diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, and consequently our models improve user satisfaction.
Researcher Affiliation: Collaboration
LLM Response: Yejin Kim (University of Texas Health Science Center at Houston), Kwangseob Kim (Kakao Corp.), Chanyoung Park and Hwanjo Yu (Pohang University of Science and Technology)
Pseudocode: No
LLM Response: The paper does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes
LLM Response: Our implementation is accessible at https://github.com/yejinjkim/seq-div-rec for reproducibility.
Open Datasets: No
LLM Response: For training, we collect 2.2 million active users' click logs during 8 days. The total number of articles is 263,016... We perform offline experiments and online A/B tests on users' historical logs on clicking blog articles from a commercial blog platform, Kakao (https://brunch.co.kr). The dataset used is proprietary and not publicly available via a direct link or formal citation.
Dataset Splits: Yes
LLM Response: In the offline experiments, we split the data into 70% for training, 10% for validation, and 20% for test by user IDs.
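Note that the split is by user IDs rather than by individual interactions, so all of a user's clicks land in exactly one split. A minimal sketch of such a split is below; the function name, shuffling approach, and seed are illustrative assumptions, not taken from the released code.

```python
import random

def split_by_user(user_ids, seed=42):
    """Assign each user ID to train/val/test (70/10/20) so that all
    interactions of a given user fall into exactly one split."""
    rng = random.Random(seed)
    users = list(user_ids)
    rng.shuffle(users)
    n = len(users)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    train = set(users[:n_train])
    val = set(users[n_train:n_train + n_val])
    test = set(users[n_train + n_val:])
    return train, val, test

# Usage: route each click-log row to a split by its user ID.
train_users, val_users, test_users = split_by_user(range(1000))
```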
Hardware Specification: No
LLM Response: The paper mentions running 'online A/B tests on our commercial blog platform' and 'on both mobile and desktop sites' but does not provide specific hardware details such as GPU/CPU models or memory specifications used for training or experiments.
Software Dependencies: No
LLM Response: The paper mentions using a 'word2vec model' and an 'adaptive subgradient optimizer' but does not specify version numbers for these or for any other software libraries, programming languages, or environments.
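For context, a common way to apply a word2vec model in this setting is to treat each user's click sequence as a sentence, with article IDs as words. The sketch below uses gensim for illustration; the library choice and all parameter values are assumptions, since the paper specifies neither.

```python
from gensim.models import Word2Vec

# Each "sentence" is one user's chronologically ordered click sequence,
# with article IDs playing the role of words.
click_sequences = [
    ["article_12", "article_7", "article_301"],
    ["article_7", "article_44"],
]

# Skip-gram word2vec over click sequences (gensim >= 4.0 API).
w2v = Word2Vec(
    sentences=click_sequences,
    vector_size=128,  # embedding dimension (illustrative, not from the paper)
    window=5,
    min_count=1,
    sg=1,
)

article_vec = w2v.wv["article_7"]  # learned embedding for one article
```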
Experiment Setup: Yes
LLM Response: The size N of the recommendation list is set to 20. ... we found l_emb = 900, l_hid = 550 performs best. We set the dropout rate to 0.1, the mini-batch size to 1024, and the number of epochs to 20. We use an adaptive subgradient optimizer.
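Pulling these values together, the following PyTorch sketch shows how the reported hyperparameters could be wired up. The GRU architecture, learning rate, and class names are assumptions; only l_emb = 900, l_hid = 550, dropout 0.1, batch size 1024, 20 epochs, N = 20, the article count, and the adaptive subgradient (Adagrad) optimizer come from the paper.

```python
import torch
import torch.nn as nn

N_ITEMS = 263_016        # number of articles reported in the paper
L_EMB, L_HID = 900, 550  # embedding / hidden sizes from the paper

class SeqRecommender(nn.Module):
    """Hypothetical GRU-based sequential recommender; the paper's exact
    architecture may differ, only the hyperparameters are reported."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_ITEMS, L_EMB)
        self.gru = nn.GRU(L_EMB, L_HID, batch_first=True)
        self.dropout = nn.Dropout(p=0.1)  # dropout rate from the paper
        self.out = nn.Linear(L_HID, N_ITEMS)

    def forward(self, item_seq):
        h, _ = self.gru(self.embed(item_seq))
        return self.out(self.dropout(h[:, -1]))  # scores for the next item

model = SeqRecommender()
# "Adaptive subgradient optimizer" = Adagrad; the learning rate is an assumption.
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)

BATCH_SIZE, EPOCHS, TOP_N = 1024, 20, 20  # values reported in the paper
```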