Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever

Authors: Jin Chen, Defu Lian, Yucheng Li, Baoyun Wang, Kai Zheng, Enhong Chen

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we conduct experiments to validate the superior performance of the proposed χIR compared with competitive approaches.
Researcher Affiliation | Collaboration | 1 University of Electronic Science and Technology of China, 2 University of Science and Technology of China, 3 Hisense
Pseudocode | Yes | Algorithm 1 sketches the pseudo-code for inbatch importance resampling (an illustrative sketch is given after the table).
Open Source Code | Yes | Implementation for BIR and χIR: https://github.com/HERECJ/XIR
Open Datasets | Yes | The experiments are conducted on the Gowalla, Amazon, Ta-feng, Echonest and Tmall datasets, as summarized in Table 1. Datasets: https://recbole.io/cn/dataset_list.html
Dataset Splits | No | The paper specifies a train/test split (
Hardware Specification | Yes | The proposed methods, including baseline methods, are implemented with the PyTorch learning framework on a Linux operating system with a Tesla V100 GPU.
Software Dependencies | No | The paper mentions the 'PyTorch learning framework' and the 'Adam optimizer' but does not specify their version numbers or any other software dependencies with specific versions.
Experiment Setup | Yes | For each dataset, the batch size is fixed to 2048, the learning rate is set to 0.001, and each dataset is trained for 100 epochs. The coefficient of the l2-regularization is tuned over {1e-4, 1e-5, 1e-6}, and the hyperparameter λ for χIR is tuned over {0.0, 0.2, 0.5, 0.8, 1.0}.
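
To make the pseudocode row above concrete, the following is a minimal, hypothetical PyTorch sketch of an inbatch importance resampling step: inbatch items are re-weighted by exp(score) divided by their proposal (popularity) probability, and negatives are resampled from that distribution. The function name, tensor shapes, and the use of item popularity as the proposal are assumptions for illustration only; the authors' actual implementation is at the GitHub link in the table.

```python
import torch
import torch.nn.functional as F

def inbatch_importance_resample(user_emb, item_emb, item_pop, num_neg=1):
    # user_emb: (B, d) user embeddings for the mini-batch
    # item_emb: (B, d) embeddings of each user's positive item
    # item_pop: (B,)   proposal probability (e.g. popularity) of each batch item
    scores = user_emb @ item_emb.t()                    # (B, B) inbatch user-item scores
    log_w = scores - torch.log(item_pop).unsqueeze(0)   # importance weights in log-space
    log_w.fill_diagonal_(float("-inf"))                 # never resample a user's own positive
    probs = F.softmax(log_w, dim=1)                     # per-user resampling distribution
    neg_idx = torch.multinomial(probs, num_neg, replacement=True)  # (B, num_neg) negative indices
    return neg_idx, scores

# Usage with random tensors (sizes here are arbitrary, not from the paper)
B, d = 2048, 64
user_emb = torch.randn(B, d)
item_emb = torch.randn(B, d)
item_pop = torch.rand(B) + 1e-6
neg_idx, scores = inbatch_importance_resample(user_emb, item_emb, item_pop)
```

The resampled indices would then typically feed a sampled-softmax style loss; the cross-batch cache that distinguishes χIR from plain inbatch resampling is outside the scope of this sketch.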
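
The experiment-setup row can likewise be read as a small hyperparameter grid: a fixed batch size of 2048, learning rate 0.001, 100 epochs, the l2 coefficient tuned over {1e-4, 1e-5, 1e-6}, and λ tuned over {0.0, 0.2, 0.5, 0.8, 1.0}. The layout below is a hypothetical way to enumerate that grid; the variable names and the training entry point are assumptions beyond what the paper states.

```python
import itertools

# Values reported in the paper; the dict layout itself is hypothetical.
base_config = {"batch_size": 2048, "learning_rate": 1e-3, "epochs": 100}
l2_grid = [1e-4, 1e-5, 1e-6]             # l2-regularization coefficients
lambda_grid = [0.0, 0.2, 0.5, 0.8, 1.0]  # cache coefficient lambda for chi-IR

# Enumerate every combination of the two tuned hyperparameters.
for weight_decay, lam in itertools.product(l2_grid, lambda_grid):
    config = {**base_config, "weight_decay": weight_decay, "lambda": lam}
    # train_one_setting(config)  # hypothetical training entry point
    print(config)
```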