PREMERE: Meta-Reweighting via Self-Ensembling for Point-of-Interest Recommendation

Authors: Minseok Kim, Hwanjun Song, Doyoung Kim, Kijung Shin, Jae-Gil Lee

AAAI 2021, pp. 4164-4171

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Thorough experiments show that replacing a weighting scheme with PREMERE boosts the performance of the state-of-the-art recommender algorithms by 2.36-26.9% on three benchmark datasets.
Researcher Affiliation | Academia | Minseok Kim, Hwanjun Song, Doyoung Kim, Kijung Shin, Jae-Gil Lee; KAIST, Korea; {minseokkim, songhwanjun, doyo09, kijungs, jaegil}@kaist.ac.kr
Pseudocode | Yes | Algorithm 1: PREMERE Training
Open Source Code | Yes | The source code is available at https://github.com/kaist-dmlab/PREMERE.
Open Datasets | Yes | We used three popular benchmark datasets, Gowalla (Liu et al. 2017), Foursquare (Yang, Zhang, and Qu 2016), and Yelp (Liu et al. 2017), which are commonly used in the POI recommendation literature (Zhou et al. 2019; Ma et al. 2018).
Dataset Splits | No | We randomly selected 80% of check-ins as the training set and used the remaining 20% as the test set in each dataset. The paper mentions 'meta-data (validation) sets' but states they are 'self-generated' rather than a fixed, pre-defined split of the original dataset. (A minimal sketch of the 80/20 split appears after the table.)
Hardware Specification | Yes | Our implementation was written using PyTorch and tested on an NVIDIA Tesla V100.
Software Dependencies | No | The paper states 'Our implementation was written using PyTorch' but does not provide version numbers for PyTorch or other software dependencies.
Experiment Setup | Yes | We used Adam (Kingma and Ba 2015) with a learning rate η = 0.001 and a weight decay of 0.001. Regarding the three hyperparameters of PREMERE, we fixed the moving average weight α = 0.95 and the history length q = 10... the stability threshold ε was set to 0.25·H(x; q). (A hedged setup sketch appears after the table.)
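
The 80%/20% random check-in split in the Dataset Splits row is straightforward to reproduce. The sketch below is a minimal illustration, assuming check-ins are stored as (user, poi, timestamp) triples; the function name, variable names, and fixed seed are our own additions, not from the paper.

```python
import random

def split_checkins(checkins, train_ratio=0.8, seed=42):
    """Randomly split (user, poi, timestamp) check-ins into train/test
    sets (80%/20%, as described in the paper). The fixed seed is our
    own addition for reproducibility, not part of the paper's setup."""
    rng = random.Random(seed)
    shuffled = checkins[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(train_ratio * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```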
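The Experiment Setup row pins down the optimizer and the three PREMERE hyperparameters. The sketch below shows, under our own assumptions, how the quoted Adam configuration and an exponential-moving-average prediction history with weight α = 0.95 and length q = 10 could be wired up in PyTorch. Here `model`, `n_users`, `n_pois`, and the buffer layout are illustrative stand-ins, not the authors' code, and the EMA update rule is a common self-ensembling pattern rather than a verbatim reconstruction of Algorithm 1.

```python
import torch

n_users, n_pois = 1000, 500              # illustrative sizes, not from the paper
model = torch.nn.Embedding(n_users, 32)  # stand-in for the actual recommender

# Optimizer as quoted in the paper: Adam with lr = 0.001 and weight decay = 0.001.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.001)

# Self-ensembling state (our own layout): an exponential moving average of
# per-epoch prediction matrices with alpha = 0.95, plus a rolling buffer
# holding the last q = 10 epochs of predictions.
alpha, q = 0.95, 10
ema_preds = torch.zeros(n_users, n_pois)
history = torch.zeros(q, n_users, n_pois)

def update_self_ensemble(epoch: int, preds: torch.Tensor) -> None:
    """Fold one epoch's prediction matrix into the EMA and the history buffer."""
    global ema_preds
    ema_preds = alpha * ema_preds + (1 - alpha) * preds.detach()
    history[epoch % q] = preds.detach()
```

How the stability threshold ε = 0.25·H(x; q) is computed over this history is not spelled out in the quoted excerpt, so it is left out of the sketch.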