Understanding Users' Budgets for Recommendation with Hierarchical Poisson Factorization

Authors: Yunhui Guo, Congfu Xu, Hanzhang Song, Xin Wang

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We compare the proposed model with several state-of-the-art budget-unaware recommendation methods on several real-world datasets. The results show the advantage of uncovering users' budgets for recommendation." (Section 6, Empirical Studies): "We test our model on several large datasets from Amazon.com [McAuley and Leskovec, 2013]."
Researcher Affiliation | Academia | "Yunhui Guo, Congfu Xu, Hanzhang Song and Xin Wang, Institute of Artificial Intelligence, College of Computer Science, Zhejiang University, China"
Pseudocode | Yes | "Algorithm 1: The variational inference algorithm of collaborative budget-aware Poisson factorization." (A simplified sketch of the underlying variational updates appears below the table.)
Open Source Code | No | The paper does not provide any specific links or statements regarding the availability of open-source code for the described methodology.
Open Datasets | Yes | "We test our model on several large datasets from Amazon.com [McAuley and Leskovec, 2013]."
Dataset Splits | Yes | "Similar to the settings of [Gopalan et al., 2013], we split each dataset into three parts: 70% of the dataset is served as training set, 20% of the dataset is served as test set and the remaining 10% is served as validation set." (A split sketch appears below the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | "For CBPF and HPF, we set each Gamma shape and rate hyperparameter to 0.3. For BPR, the learning rate is 0.001 and the regularization parameter is 0.5. For CliMF, the learning rate is 0.005 and the regularization parameter is 0.01. And we fix the dimension of the latent vectors of all models to 10 for fair comparison." (These settings are collected in the configuration sketch below the table.)
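
The Algorithm 1 referenced in the Pseudocode row is the variational inference routine for CBPF, which augments hierarchical Poisson factorization with per-user budget variables. The full algorithm is not reproduced on this page, so the following is a minimal NumPy sketch of the standard mean-field (CAVI) updates for plain Bayesian Poisson factorization under the quoted Gamma(0.3, 0.3) priors and K = 10 latent dimensions; the function name pf_cavi, the batch update schedule, and the dense handling of the rating matrix are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.special import digamma

    def pf_cavi(Y, K=10, a=0.3, b=0.3, n_iters=50, seed=0):
        """Mean-field CAVI for plain Poisson factorization:
        y_ui ~ Poisson(theta_u . beta_i), theta_uk, beta_ik ~ Gamma(a, b).
        A simplified stand-in for the budget-aware Algorithm 1 in the paper."""
        rng = np.random.default_rng(seed)
        U, I = Y.shape
        # Variational Gamma(shape, rate) parameters, jittered to break symmetry.
        t_shp = a + 0.01 * rng.random((U, K)); t_rte = b + 0.01 * rng.random((U, K))
        b_shp = a + 0.01 * rng.random((I, K)); b_rte = b + 0.01 * rng.random((I, K))
        for _ in range(n_iters):
            # Expected log factors under the current variational Gammas.
            elog_t = digamma(t_shp) - np.log(t_rte)
            elog_b = digamma(b_shp) - np.log(b_rte)
            # Responsibilities phi_uik proportional to exp(E[log theta_uk] + E[log beta_ik]);
            # dense U x I x K here for brevity (real data would touch only y_ui > 0).
            phi = np.exp(elog_t[:, None, :] + elog_b[None, :, :])
            phi /= phi.sum(axis=2, keepdims=True)
            Yphi = Y[:, :, None] * phi  # expected latent counts z_uik
            # Gamma updates: shapes absorb expected counts, rates the partner's means.
            t_shp = a + Yphi.sum(axis=1)
            t_rte = b + (b_shp / b_rte).sum(axis=0)
            b_shp = a + Yphi.sum(axis=0)
            b_rte = b + (t_shp / t_rte).sum(axis=0)
        return t_shp / t_rte, b_shp / b_rte  # posterior-mean user/item factors

With Gamma(0.3, 0.3) priors the inferred factors stay sparse and non-negative, which is the property HPF-style models exploit on implicit-feedback count data.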
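The 70/20/10 protocol quoted in the Dataset Splits row can be mirrored in a few lines. Whether the original split was global or per-user is not stated, so this sketch assumes a single global random split, and the function name split_ratings is ours.

    import numpy as np

    def split_ratings(n_ratings, seed=42):
        """70% train / 20% test / 10% validation split over rating indices
        (assumed global random split; rounding leftovers go to validation)."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(n_ratings)
        n_train = int(0.7 * n_ratings)
        n_test = int(0.2 * n_ratings)
        return (idx[:n_train],                  # training set
                idx[n_train:n_train + n_test],  # test set
                idx[n_train + n_test:])         # validation set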
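Finally, the hyperparameters quoted in the Experiment Setup row collect into one place. The values are the reported ones; the dictionary layout and key names are only illustrative.

    # Reported hyperparameters; dict layout and key names are illustrative.
    HYPERPARAMS = {
        "CBPF":  {"gamma_shape": 0.3, "gamma_rate": 0.3, "latent_dim": 10},
        "HPF":   {"gamma_shape": 0.3, "gamma_rate": 0.3, "latent_dim": 10},
        "BPR":   {"learning_rate": 0.001, "regularization": 0.5, "latent_dim": 10},
        "CliMF": {"learning_rate": 0.005, "regularization": 0.01, "latent_dim": 10},
    }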