Learning Multi-Faceted Prototypical User Interests

Authors: Nhu-Thuat Tran, Hady W. Lauw

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5 EXPERIMENTS"
Researcher Affiliation | Academia | "Nhu-Thuat Tran, Singapore Management University, Singapore, nttran.2020@phdcs.smu.edu.sg; Hady W. Lauw, Singapore Management University, Singapore, hadywlauw@smu.edu.sg"
Pseudocode | Yes | "Algorithm 1 presents detailed training procedure of FACETVAE."
Open Source Code | Yes | "The implementation can be found in the link https://github.com/PreferredAI/FacetVAE" and "The code, data and related materials can be found in the link https://github.com/PreferredAI/FacetVAE"
Open Datasets | Yes | "We consider three real-world datasets: i) MovieLens-1M (ML-1M) (6,035 users, 3,126 movies, 574,376 ratings); ii) CiteULike-a (5,551 users, 16,945 articles, 204,929 interactions); iii) Yelp (29,111 users, 22,121 businesses, 1,052,627 reviews)." (Dataset links: https://grouplens.org/datasets/movielens/, http://wanghao.in/CDL.htm, https://www.yelp.com/dataset)
Dataset Splits | Yes | "We construct training, validation and test sets by randomly dividing users' interactions with ratio 8:1:1, respectively."
Hardware Specification | Yes | "All models are trained on an NVIDIA RTX 2080 Ti GPU machine ten times with different random seeds."
Software Dependencies | No | The paper mentions software such as Python and PyTorch (implied by the algorithm and framework) but does not provide specific version numbers for these or other ancillary software dependencies.
Experiment Setup | Yes | "For FACETVAE, we tune the hyper-parameters in the same range as MacridVAE. Dropout rate is 0.5. σ0 ∈ {0.05, 0.075, 0.1}. β = min(β0, update/T), with β0 ∈ {0.01, 0.05, 0.1, 0.2, 0.5, 1}, where update is the number of the model's parameter updates and T ∈ {0.1k, 0.5k, 1k, 5k, 10k, 20k}. τ ∈ [0.1, 0.2], τdec ∈ {0.1, 0.15, 0.2}, τ0 ∈ {0.1, 0.2, 0.5, 1, 5, 8}. γ is tanh, while γ0 is LeakyReLU(0.3) for ML-1M and tanh for the other datasets. denc = d = 64 for CiteULike-a and Yelp, and denc = 300 for ML-1M. F = J = 3 for CiteULike-a and ML-1M, and F = J = 4 for Yelp. For FACETVAE, we set the maximal number of epochs to 200 and stop training after 15 epochs without improvement in Recall@20 on the validation set."
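The KL-annealing schedule quoted in the Experiment Setup row, β = min(β0, update/T), ramps the KL weight linearly with the number of parameter updates and caps it at β0. A minimal sketch of that schedule follows; the function and variable names are illustrative, not taken from the authors' released code:

```python
def kl_beta(beta0: float, num_updates: int, T: float) -> float:
    """KL annealing weight: grows linearly with the update count,
    capped at beta0 (beta = min(beta0, update / T))."""
    return min(beta0, num_updates / T)

# Example: with beta0 = 0.2 and T = 1000, beta reaches its cap
# after 200 parameter updates and stays there.
schedule = [kl_beta(0.2, step, 1000.0) for step in (0, 100, 200, 500)]
print(schedule)
```

In a training loop, `num_updates` would be incremented once per optimizer step, and the returned weight would scale the KL term of the VAE objective.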