Gamma-Poisson Dynamic Matrix Factorization Embedded with Metadata Influence
Authors: Trong Dinh Thac Do, Longbing Cao
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that mGDMF significantly (both effectively and efficiently) outperforms the state-of-the-art static and dynamic models on large, sparse and dynamic data. |
| Researcher Affiliation | Academia | Trong Dinh Thac Do, Advanced Analytics Institute, University of Technology Sydney (thacdtd@gmail.com); Longbing Cao, Advanced Analytics Institute, University of Technology Sydney (longbing.cao@gmail.com) |
| Pseudocode | Yes | Algorithm 1: SVI for mGDMF |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology. |
| Open Datasets | Yes | Netflix-Time: a subset of the Netflix Prize data [4], obtained with a procedure similar to [37, 16, 34]... Yelp-Active: a subset of the Yelp Academic Challenge data, obtained similarly to [34]... LFM-Tracks: the number of times a user listened to a song during a given time period [12]; 16 time slices of 0.9K users and 1M tracks (i.e., songs), similar to [34]. |
| Dataset Splits | Yes | We then randomly sample and assign 5% of the test set for validation, similar to [16, 34]. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers used for the experiments. |
| Experiment Setup | Yes | For the static portions, we set a = b = c = d = 0.3, in the same way as in HPF. The metadata hyper-parameters a′, b′, c′ and d′ are set to a small value (0.1) so that the user/item attribute weights automatically grow over time. We also set aθ = aγ = bθ = bβ = aι = 1 to keep the chains small at the beginning. We test a wide range of latent components K from 10 to 200 and choose the best, K = 50, for mGDMF/GDMF. For the SVI hyper-parameters, we set the learning-rate delay iter0 to 10,000 and the learning-rate power ϵ to 0.7, similar to [34] and [3]. |
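The reported setup (Gamma priors of 0.3, K = 50, SVI delay iter0 = 10,000 and power ϵ = 0.7) can be sketched as a toy Gamma-Poisson generative model with a Robbins-Monro step-size schedule. This is a minimal illustration only; variable names and matrix sizes are hypothetical, and the paper's full mGDMF model (dynamics and metadata terms) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyper-parameters quoted in the paper's setup (variable names are ours).
K = 50                 # latent components chosen for mGDMF/GDMF
a = b = c = d = 0.3    # static Gamma shape/rate priors, as in HPF
iter0 = 10_000         # SVI learning-rate delay
eps = 0.7              # SVI learning-rate power

n_users, n_items = 100, 80  # toy sizes, not from the paper

# Gamma-Poisson factorization sketch: sparse Gamma factors imply sparse counts.
theta = rng.gamma(shape=a, scale=1.0 / b, size=(n_users, K))  # user factors
beta = rng.gamma(shape=c, scale=1.0 / d, size=(n_items, K))   # item factors
y = rng.poisson(theta @ beta.T)  # implied user-item count matrix

def svi_step_size(t: int, delay: int = iter0, power: float = eps) -> float:
    """Robbins-Monro step size rho_t = (t + delay)^(-power)."""
    return (t + delay) ** -power

print(y.shape)  # (100, 80)
```

With this schedule the step size decays slowly (rho_1 ≈ 0.0016), which is why a large delay and a power between 0.5 and 1 are commonly paired in stochastic variational inference.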