Ada-Retrieval: An Adaptive Multi-Round Retrieval Paradigm for Sequential Recommendations

Authors: Lei Li, Jianxun Lian, Xiao Zhou, Xing Xie

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform experiments on three widely used public datasets, incorporating five powerful sequential recommenders as backbone models. Our results demonstrate that Ada-Retrieval significantly enhances the performance of various base models, with consistent improvements observed across different datasets.
Researcher Affiliation | Collaboration | Lei Li¹, Jianxun Lian², Xiao Zhou¹*, Xing Xie² (¹Gaoling School of Artificial Intelligence, Renmin University of China; ²Microsoft Research Asia)
Pseudocode | No | The paper describes the model architecture and processes using text and diagrams (Figure 2), but it does not include any formal pseudocode or algorithm blocks (e.g., a labeled "Algorithm 1" or a step-by-step procedure).
Open Source Code | Yes | Our code and data are publicly available at: https://github.com/ll0ruc/AdaRetrieval.
Open Datasets | Yes | To validate our proposed method across diverse data types, we assess the model using three publicly available benchmark datasets. Beauty and Sports represent subsets of the Amazon Product dataset (McAuley et al. 2015)... The Yelp dataset (https://www.yelp.com/dataset) is a sizable collection of extended item sequences...
Dataset Splits | Yes | To facilitate comprehensive model evaluation, we employ the leave-one-out strategy (Kang and McAuley 2018; Zhou et al. 2020) for partitioning each user's item sequence into training, validation, and test sets. (A minimal split sketch appears after the table.)
Hardware Specification | Yes | Ada-Retrieval is implemented using Python 3.8 and PyTorch 1.12.1, executed on NVIDIA V100 GPUs with 32GB memory.
Software Dependencies | Yes | Ada-Retrieval is implemented using Python 3.8 and PyTorch 1.12.1, executed on NVIDIA V100 GPUs with 32GB memory. (An environment sanity check appears after the table.)
Experiment Setup | Yes | Training parameters include an Adam optimizer with a learning rate of 0.001 and a batch size of 1024. Across all datasets, we set the maximum sequence length to 50, the embedding dimension to 64, and the maximum number of training epochs to 200. For Ada-Retrieval, we varied the hyperparameters T and λ within the ranges [3, 8] and [0.1, 0.9], with step sizes of 1 and 0.2, respectively. (A configuration sketch follows the table.)
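
The leave-one-out protocol quoted under Dataset Splits is standard in sequential recommendation: each user's most recent interaction is held out for testing, the second most recent for validation, and everything earlier is used for training. A minimal sketch of this split, with illustrative function and variable names that are not from the released code:

```python
from typing import Dict, List, Tuple

def leave_one_out_split(
    user_sequences: Dict[int, List[int]]
) -> Tuple[Dict[int, List[int]], Dict[int, int], Dict[int, int]]:
    """Leave-one-out split: last item -> test, second-to-last -> validation,
    remaining prefix -> training, per user."""
    train, valid, test = {}, {}, {}
    for user, items in user_sequences.items():
        if len(items) < 3:
            # Too short to populate all three splits; keep everything for training.
            train[user] = items
            continue
        train[user] = items[:-2]
        valid[user] = items[-2]
        test[user] = items[-1]
    return train, valid, test
```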
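
The Hardware and Software rows pin Python 3.8 and PyTorch 1.12.1 on 32GB V100 GPUs. A small environment sanity check along these lines (the version strings come from the paper; the checking code itself is ours):

```python
import sys
import torch

# Versions reported in the paper: Python 3.8, PyTorch 1.12.1.
print(f"Python:  {sys.version.split()[0]}")   # expect 3.8.x
print(f"PyTorch: {torch.__version__}")        # expect 1.12.1

if torch.cuda.is_available():
    # The paper reports NVIDIA V100 GPUs with 32GB memory.
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA device visible; the paper's experiments ran on V100 GPUs.")
```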
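
The Experiment Setup row maps directly onto an optimizer configuration and a small grid over T (the number of retrieval rounds) and λ. A sketch using the reported values; the dummy module below stands in for whichever of the five backbone recommenders is being trained and is not the authors' code:

```python
import itertools
import torch
import torch.nn as nn

# Training hyperparameters reported in the paper.
LEARNING_RATE = 0.001
BATCH_SIZE = 1024
MAX_SEQ_LEN = 50
EMBED_DIM = 64
MAX_EPOCHS = 200

# Reported search grids: T in [3, 8] with step 1, lambda in [0.1, 0.9] with step 0.2.
T_GRID = list(range(3, 9))                  # 3, 4, 5, 6, 7, 8
LAMBDA_GRID = [0.1, 0.3, 0.5, 0.7, 0.9]

# Stand-in module so the Adam call below is concrete; the real model is one of
# the paper's sequential recommendation backbones equipped with Ada-Retrieval.
dummy_backbone = nn.Embedding(num_embeddings=10_000, embedding_dim=EMBED_DIM)

for T, lam in itertools.product(T_GRID, LAMBDA_GRID):
    optimizer = torch.optim.Adam(dummy_backbone.parameters(), lr=LEARNING_RATE)
    print(f"run: T={T}, lambda={lam}, lr={LEARNING_RATE}, batch={BATCH_SIZE}, "
          f"max_seq_len={MAX_SEQ_LEN}, max_epochs={MAX_EPOCHS}")
```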