Discrete Trust-aware Matrix Factorization for Fast Recommendation

Authors: Guibing Guo, Enneng Yang, Li Shen, Xiaochun Yang, Xiaodong He

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In addition, experiments on two real-world datasets demonstrate the superiority of our approach against other state-of-the-art approaches in terms of ranking accuracy and efficiency."
Researcher Affiliation | Collaboration | Guibing Guo (Northeastern University, China), Enneng Yang (Northeastern University, China), Li Shen (Tencent AI Lab, China), Xiaochun Yang (Northeastern University, China), Xiaodong He (JD AI Research, China)
Pseudocode | Yes | Algorithm 1: Discrete Trust-aware Matrix Factorization
  Input: S: rating matrix; Γ: trust matrix; r: code length
  Output: user and item binary codes B, D
  1: Initialize B, W, X, Z ∈ R^{r×m} and D, Y ∈ R^{r×n}
  2: while not converged do
  3:   for i ∈ {1, ..., m} do
  4:     repeat
  5:       update B_i bit by bit via Eq. (3) (r bits in total)
  6:     until the bits converge
  7:   end for
  8:   for j ∈ {1, ..., n} do
  9:     repeat
  10:      update D_j bit by bit via Eq. (4) (r bits in total)
  11:    until the bits converge
  12:  end for
  13:  for k ∈ {1, ..., m} do
  14:    repeat
  15:      update W_k bit by bit via Eq. (5) (r bits in total)
  16:    until the bits converge
  17:  end for
  18:  update X by Eq. (6)
  19:  update Y by Eq. (7)
  20:  update Z by Eq. (8)
  21: end while
  22: return B, D for evaluation
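The bit-by-bit updates in Algorithm 1 follow the discrete coordinate descent pattern: each bit of a user's binary code is flipped in turn and kept only if it improves the objective. A minimal NumPy sketch of the user-code loop (Algorithm 1, lines 3-7), assuming a plain squared-error objective in place of the paper's closed-form bit update Eq. (3), which this report does not reproduce:

```python
import numpy as np

def update_user_codes(B, D, S, max_sweeps=5):
    """Bit-by-bit update of binary user codes B (values in {-1, +1})
    so that B.T @ D better approximates the rating matrix S.
    B: (r, m), D: (r, n), S: (m, n). Illustrative stand-in only."""
    r, m = B.shape
    for i in range(m):                    # each user (Algorithm 1, line 3)
        for _ in range(max_sweeps):       # repeat ... until the bits converge
            changed = False
            for k in range(r):            # r bits in total
                loss_old = np.sum((S[i] - B[:, i] @ D) ** 2)
                B[k, i] = -B[k, i]        # tentative bit flip
                loss_new = np.sum((S[i] - B[:, i] @ D) ** 2)
                if loss_new >= loss_old:  # flip did not help: revert it
                    B[k, i] = -B[k, i]
                else:
                    changed = True
            if not changed:               # no bit changed: converged
                break
    return B
```

Because a flip is only kept when it strictly lowers the loss, each sweep is monotone non-increasing, which is what guarantees the inner `repeat ... until` loop terminates.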
Open Source Code | No | The paper does not explicitly provide a link to the source code or state that the code is publicly available.
Open Datasets | Yes | "Two real-world datasets are used in our experiments, namely Epinions and Douban." (Epinions: http://www.trustlet.org/wiki/Epinions_dataset; Douban: https://www.cse.cuhk.edu.hk/irwin.king.new/pub/data/douban)
Dataset Splits | No | The paper states "for each user, we randomly selected 50% as training data and the rest as test data." It does not explicitly mention a validation split.
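The quoted protocol splits each user's ratings independently rather than splitting the rating set globally. A small sketch of such a per-user split, with function and variable names of our own choosing (not from the paper):

```python
import random
from collections import defaultdict

def per_user_split(ratings, train_frac=0.5, seed=42):
    """Split (user, item, rating) triples so each user keeps train_frac
    of their ratings for training and the rest for testing, mirroring
    the 50%/50% per-user protocol quoted above."""
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for triple in ratings:
        by_user[triple[0]].append(triple)
    train, test = [], []
    for user, triples in by_user.items():
        rng.shuffle(triples)               # randomize within this user
        cut = int(len(triples) * train_frac)
        train.extend(triples[:cut])
        test.extend(triples[cut:])
    return train, test
```

Splitting per user guarantees that every user appears in both partitions, which a global random split does not.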
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for the experiments; it only mentions "The experiments are executed on Douban due to its greater size."
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | Parameter settings: "The parameters of all the methods are either determined by empirical study or suggested by the original paper. Specifically, for BCCF, we tune the hyper-parameter λ within [10^-4, ..., 10^-2]. The hyper-parameters α and β of DCF are tuned within [10^-4, ..., 10^2]. For TrustMF, we adopt the parameter settings recommended by the authors: λ = 0.001 and λ_T = 1. For DTMF proposed in this paper, we search α, β, γ, and λ from [10^-4, ..., 10^3]."
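The quoted search ranges are log-spaced grids over powers of ten. A sketch of how such a search could be run, where the grid bounds come from the DTMF range quoted above but the `evaluate` callback and all names are placeholders of ours, not the paper's code:

```python
import itertools

# Log-spaced grids matching the DTMF search range [10^-4, ..., 10^3].
grid = {
    "alpha": [10.0 ** e for e in range(-4, 4)],
    "beta":  [10.0 ** e for e in range(-4, 4)],
    "gamma": [10.0 ** e for e in range(-4, 4)],
    "lam":   [10.0 ** e for e in range(-4, 4)],
}

def grid_search(evaluate, grid):
    """Exhaustively score every combination and return the best one."""
    best_score, best_params = float("-inf"), None
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)          # higher is better
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

With four parameters of eight values each this is 4,096 trainings, which is why such searches are usually run on the smaller dataset or with early stopping.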