Identify Then Recommend: Towards Unsupervised Group Recommendation

Authors: Yue Liu, Shihao Zhu, Tianyuan Yang, Jian Ma, Wenliang Zhong

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the superiority and effectiveness of ITR on both user recommendation (e.g., 22.22% NDCG@5) and group recommendation (e.g., 22.95% NDCG@5). Furthermore, we deploy ITR on the industrial recommender and achieve promising results.
Researcher Affiliation | Collaboration | Yue Liu (Ant Group; National University of Singapore; yueliu1990731@163.com); Shihao Zhu (Ant Group, Hangzhou, China); Tianyuan Yang (Ant Group, Hangzhou, China); Jian Ma (Ant Group, Hangzhou, China); Wenliang Zhong (Ant Group, Hangzhou, China)
Pseudocode | Yes | The process of ITR is summarized in Algorithm 1. Algorithm 1: Identify Then Recommend (ITR). Input: user set U; item set V; user-item interaction P; epoch number E; trade-off parameters a, b; range of quantile q. Output: trained ITR model. A hedged training-loop sketch following this signature is given after the table.
Open Source Code | Yes | The code is available on GitHub: https://github.com/yueliu1999/ITR
Open Datasets | Yes | Public Benchmark. We conduct experiments on two real-world public datasets, Mafengwo and CAMRa2011 [7].
Dataset Splits | No | The paper mentions 'train' and 'test' sets but does not describe a validation split (no percentages, sample counts, or construction methodology). The statement 'All results are obtained from three runs' addresses result reliability, not a distinct validation set for hyperparameter tuning.
Hardware Specification | Yes | Experimental results are obtained from the server with four-core Intel(R) Xeon(R) Platinum 8358 CPUs @ 2.60GHz, one NVIDIA A100 GPU (40GB), and the PyTorch platform.
Software Dependencies | No | The paper only mentions the 'PyTorch platform' without specifying its version or any other software dependencies with version numbers.
Experiment Setup | Yes | In our ITR model, we set b to 10, and a to 0.01 for Mafengwo and 10 for CAMRa2011, respectively. The range of q is set as {0.1, 0.2, 0.3}. The learning rate is set to 0.001 for CAMRa2011 and 0.0001 for Mafengwo, respectively. All results are obtained from three runs. These settings are collected into a configuration sketch after the table.
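
The Pseudocode row only documents Algorithm 1's inputs and outputs. The PyTorch-style skeleton below is a minimal sketch that mirrors that stated signature; the model class, the loss decomposition, and the way the quantile range q is consumed are assumptions for illustration only, not the authors' implementation (see the official repository for the real code).

```python
import torch
import torch.nn as nn


class ITRModelSketch(nn.Module):
    """Placeholder model: user/item embeddings with a dot-product score.

    This is NOT the authors' architecture; it only gives the training loop
    below something concrete to optimize.
    """

    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, P, quantile):
        # P: dense user-item interaction matrix, shape (num_users, num_items), float in {0, 1}.
        scores = self.user_emb.weight @ self.item_emb.weight.T
        rec_loss = nn.functional.binary_cross_entropy_with_logits(scores, P)
        # Two stand-in auxiliary terms; in the paper these would correspond to the
        # objectives weighted by the trade-off parameters a and b.
        aux1 = scores.quantile(quantile).abs()
        aux2 = scores.pow(2).mean()
        return rec_loss, aux1, aux2


def train_itr(P, num_users, num_items, E, a, b, q_range, lr):
    """Training skeleton mirroring Algorithm 1's stated inputs and output."""
    model = ITRModelSketch(num_users, num_items)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(E):
        for q in q_range:  # assumed: the quantile range is swept within each epoch
            optimizer.zero_grad()
            rec_loss, aux1, aux2 = model(P, quantile=q)
            loss = rec_loss + a * aux1 + b * aux2  # trade-off parameters a and b
            loss.backward()
            optimizer.step()
    return model  # Output: trained ITR model
```

Under these assumptions, a call such as `train_itr(P, num_users, num_items, E=100, a=0.01, b=10, q_range=[0.1, 0.2, 0.3], lr=0.0001)` would mirror the Mafengwo settings quoted in the table; the epoch number E is not reported in the quoted setup and is chosen arbitrarily here.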
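
For convenience, the hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration. The dictionary below restates exactly those values; the key names are illustrative and do not come from the authors' code.

```python
# Hyperparameters as reported in the paper's experiment setup.
# Key names are illustrative, not taken from the authors' code.
ITR_CONFIG = {
    "Mafengwo": {
        "a": 0.01,                    # trade-off parameter a
        "b": 10,                      # trade-off parameter b (shared across datasets)
        "q_range": [0.1, 0.2, 0.3],   # range of quantile q
        "learning_rate": 0.0001,
        "num_runs": 3,                # all results obtained from three runs
    },
    "CAMRa2011": {
        "a": 10,
        "b": 10,
        "q_range": [0.1, 0.2, 0.3],
        "learning_rate": 0.001,
        "num_runs": 3,
    },
}
```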