Time-Sensitive Recommendation From Recurrent User Activities
Authors: Nan Du, Yichen Wang, Niao He, Jimeng Sun, Le Song
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared to other state-of-the-art methods on both synthetic and real datasets, our model achieves superior predictive performance in the two time-sensitive recommendation tasks. |
| Researcher Affiliation | Academia | Nan Du, Yichen Wang, Niao He, Le Song; College of Computing, Georgia Tech; H. Milton Stewart School of Industrial & Systems Engineering, Georgia Tech; dunan@gatech.edu, yichen.wang@gatech.edu, nhe6@gatech.edu, lsong@cc.gatech.edu |
| Pseudocode | Yes | Algorithm 1: Learning Hawkes-Recommender; Algorithm 2: Prox^{U_{k-1}}_{η_{k-1}}(∇f(U_{k-1})); Algorithm 3: LMO_{ψ_2}(∇f(U_{k-1})) |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We also evaluate the proposed method on real datasets. last.fm consists of music streaming logs between 1,000 users and 3,000 artists. There are around 20,000 observed user-artist pairs with more than one million events in total. tmall.com contains around 100K shopping events between 26,376 users and 2,563 stores. The time unit for both datasets is one hour. The MIMIC II medical dataset is a collection of de-identified clinical visit records of Intensive Care Unit patients over seven years. |
| Dataset Splits | Yes | For each user, we randomly pick 20% of all the items she has consumed and hold out the entire sequence of events. For each sequence of the remaining 80% of items, we further split it into a pair of training/testing subsequences. |
| Hardware Specification | No | The paper does not specify any details about the hardware used for running the experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper mentions blending optimization methods (proximal gradient, conditional gradient) but does not specify any software dependencies, libraries, or their version numbers used for implementation. |
| Experiment Setup | Yes | By Theorem 1, it is inefficient to directly estimate the exact threshold value for ρ. Instead, we tune ρ, λ, and β to give the best performance. ... The bandwidth of the triggering kernel is fixed to one. |
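The per-user split quoted in the Dataset Splits row (hold out 20% of a user's items entirely, then cut each remaining item's event sequence into training/testing subsequences) can be sketched as below. This is a minimal illustration, not the paper's code; the function name `split_user_events`, the 80/20 within-sequence cut, and the random seed are assumptions.

```python
import random

def split_user_events(user_items, train_frac=0.8, holdout_frac=0.2, seed=0):
    """Per-user split: hold out `holdout_frac` of the user's items with
    their entire event sequences, then split each remaining item's
    time-ordered sequence into training/testing subsequences."""
    rng = random.Random(seed)
    items = sorted(user_items)          # deterministic order before shuffling
    rng.shuffle(items)
    n_heldout = max(1, round(holdout_frac * len(items)))
    heldout = {i: sorted(user_items[i]) for i in items[:n_heldout]}
    train, test = {}, {}
    for i in items[n_heldout:]:
        events = sorted(user_items[i])  # events are timestamps
        cut = round(train_frac * len(events))
        train[i], test[i] = events[:cut], events[cut:]
    return heldout, train, test
```

In this sketch the heldout items test whether the model recommends entirely unseen items, while the per-sequence cut tests prediction of a known item's next event time, matching the two tasks the report mentions.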
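The model behind "Learning Hawkes-Recommender" is a self-exciting point process; the Experiment Setup row notes the triggering-kernel bandwidth is fixed to one. A generic conditional-intensity sketch for one user-item pair, with an exponential triggering kernel, is below. The parameter names `eta` (base rate) and `alpha` (excitation weight) are hypothetical stand-ins for the paper's low-rank parameters, not its exact formulation.

```python
import math

def hawkes_intensity(t, history, eta, alpha, bandwidth=1.0):
    """Conditional intensity of a Hawkes process at time t: a constant
    base rate plus exponentially decaying contributions from each past
    event. Only events strictly before t contribute."""
    return eta + alpha * sum(
        math.exp(-(t - ti) / bandwidth) for ti in history if ti < t
    )
```

Each past event temporarily raises the intensity, capturing the recurrent-activity pattern (e.g., repeated listens to an artist) that the paper's recommendation tasks exploit.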
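The Software Dependencies row notes the paper blends proximal-gradient and conditional-gradient updates (the Prox and LMO subroutines in the Pseudocode row). As a generic illustration of the proximal half, here is the elementwise soft-thresholding operator (the prox of an ℓ1 penalty) and one proximal-gradient iterate; this is not the paper's exact operator, and the names `soft_threshold` and `prox_gradient_step` are assumptions.

```python
import math

def soft_threshold(x, tau):
    """Prox of tau * ||x||_1: shrink each entry toward zero by tau,
    zeroing entries whose magnitude is below tau."""
    return [math.copysign(max(abs(v) - tau, 0.0), v) for v in x]

def prox_gradient_step(x, grad, eta, tau):
    """One proximal-gradient iterate: a gradient step with stepsize eta
    followed by the prox of the (scaled) ell-1 penalty."""
    return soft_threshold([xi - eta * gi for xi, gi in zip(x, grad)], tau)
```

The conditional-gradient (LMO) half would instead solve a linear minimization over the constraint set, which avoids a full prox when that set (e.g., a nuclear-norm ball) makes the prox expensive.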