Adversarial Oracular Seq2seq Learning for Sequential Recommendation

Authors: Pengyu Zhao, Tianxiao Shui, Yuanxing Zhang, Kecheng Xiao, Kaigui Bian

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We examine the performance of AOS4Rec over RNN-based and Transformer-based recommender systems on two large datasets from real-world applications and make comparisons with state-of-the-art methods. Results indicate the accuracy and efficiency of AOS4Rec, and further analysis verifies that AOS4Rec has both robustness and practicability for real-world scenarios.
Researcher Affiliation | Academia | School of EECS, Peking University, Beijing, China {pengyuzhao, stx_pkucs, longo, kecheng, bkg}@pku.edu.cn
Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We evaluate the proposed AOS4Rec with the baseline methods on two datasets from real-world applications. [...] YOOCHOOSE. The YOOCHOOSE dataset contains a collection of sessions encapsulating the click events from users. [...] MovieLens. We use the MovieLens-20M dataset, which is a stable benchmark dataset for evaluating the performance of recommender systems. We follow the same preprocessing procedure from [Kang and McAuley, 2018; Xu et al., 2019].
Dataset Splits | Yes | We discard users and items with fewer than 4 interactions, and then split the datasets into training sets, validation sets and test sets based on the length of sequences in the datasets, where the second-to-last 20% of items in each sequence are used for validation and the last 20% of items are used for testing. (A hedged code sketch of this split procedure follows the table.)
Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., GPU models, CPU types, or cloud computing instances with their specifications).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch or TensorFlow).
Experiment Setup | Yes | We employ grid search to find the best settings of hyper-parameters and list the details in Tab. 2. [...] Table 2: Hyper-parameter settings in AOS4Rec — learning rate 1e-3, batch size 128, beam size 5, weight-decay 12. (A hedged configuration sketch follows the table.)
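
The split procedure quoted in the Dataset Splits row can be illustrated with a short sketch. This is not the authors' code: the function names and the single-pass filtering are assumptions; only the minimum-interaction threshold (4) and the 60/20/20 per-sequence cut (last 20% for testing, the 20% before that for validation) come from the quoted text.

```python
# Hypothetical sketch of the described preprocessing and split,
# assuming interactions are (user_id, item_id, timestamp) tuples.
from collections import Counter

MIN_INTERACTIONS = 4  # threshold quoted from the paper


def filter_interactions(interactions):
    """Drop users and items with fewer than MIN_INTERACTIONS interactions.

    A single filtering pass is assumed here; the paper does not say
    whether the filtering is applied iteratively.
    """
    user_counts = Counter(u for u, _, _ in interactions)
    item_counts = Counter(i for _, i, _ in interactions)
    return [
        (u, i, t)
        for u, i, t in interactions
        if user_counts[u] >= MIN_INTERACTIONS and item_counts[i] >= MIN_INTERACTIONS
    ]


def split_sequence(items):
    """Cut one time-ordered item sequence into train / validation / test.

    The last 20% of items are held out for testing and the preceding
    20% for validation, matching the quoted split description.
    """
    n = len(items)
    val_start = int(round(0.6 * n))
    test_start = int(round(0.8 * n))
    return items[:val_start], items[val_start:test_start], items[test_start:]


# Example: a sequence of 10 items yields 6 training, 2 validation, 2 test items.
train, val, test = split_sequence(list(range(10)))
```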
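The Experiment Setup row can likewise be read as a small configuration plus a grid search. The sketch below is an assumption-laden illustration, not the authors' setup: only the learning rate, batch size, and beam size are taken from the quoted Table 2 (the weight-decay value is garbled in the extracted text and omitted), and the candidate grids and `train_and_evaluate` callback are hypothetical.

```python
# Hedged sketch of the reported settings and the grid search the paper mentions.
from itertools import product

# Values quoted from Table 2 of the paper; weight decay is omitted because
# its value did not survive text extraction cleanly.
best_config = {
    "learning_rate": 1e-3,
    "batch_size": 128,
    "beam_size": 5,
}

# Hypothetical candidate grids; the paper does not list the searched ranges.
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [64, 128, 256],
    "beam_size": [1, 5, 10],
}


def grid_search(train_and_evaluate):
    """Try every combination and keep the best-scoring configuration.

    `train_and_evaluate(config) -> float` is an assumed callback that trains
    the model with `config` and returns a validation metric (higher is better).
    """
    keys = list(search_space)
    best_score, best = float("-inf"), None
    for values in product(*(search_space[k] for k in keys)):
        config = dict(zip(keys, values))
        score = train_and_evaluate(config)
        if score > best_score:
            best_score, best = score, config
    return best, best_score
```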