Exploring the Context of Locations for Personalized Location Recommendations
Authors: Xin Liu, Yong Liu, Xiaoli Li
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments over four real datasets. Experimental results demonstrate that our approach significantly outperforms the state-of-the-art location recommendation methods. |
| Researcher Affiliation | Academia | Xin Liu, Yong Liu and Xiaoli Li; Institute for Infocomm Research (I2R), A*STAR, Singapore; {liu-x, liuyo, xlli}@i2r.a-star.edu.sg |
| Pseudocode | No | The paper describes algorithmic steps and equations in paragraph form, but does not contain a clearly labeled "Pseudocode" or "Algorithm" block. |
| Open Source Code | No | The paper mentions "word2vec" with a footnote linking to "https://code.google.com/p/word2vec/", but this refers to a third-party tool used, not the authors' own implementation code for the described methodology. |
| Open Datasets | Yes | The evaluation is conducted over real-world location-based social network data [Liu et al., 2014] collected from Gowalla (available at http://www.yongliu.org/datasets). The data contains users' check-in information, including geographical coordinates, time stamps, etc., generated before June 1, 2011 in 4 US cities: Austin, Los Angeles, Chicago, and Houston. Table 1 summarizes the statistics of the data, where N_u, N_l, and N_c denote the number of users, locations, and check-ins respectively. Moreover, the category information of each observed location has also been collected. Locations in Gowalla are classified into 7 main categories: community, entertainment, food, nightlife, outdoors, shopping, and travel. |
| Dataset Splits | No | The paper mentions a train/test split: "For each method, we use the check-in data before March 28th, 2011 (around 80% of all check-ins) to train the models, and the rest data is used for testing." However, it does not specify a separate validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of 'word2vec' but does not specify any version numbers for this or any other software libraries or dependencies used in the implementation. |
| Experiment Setup | Yes | For WRMF, we set the latent factor vector dimensionality, , and regularization parameter to 150, 10, and 0.01 respectively; for PTMF, category level 2 is considered, latent factor vector dimensionality, learning rate, and regularization parameters are set to 5, 0.0001, and 0.01 respectively. For SG-CWARP, the latent factor vector dimensionality, , regularization parameters are set to 200, 1, and 0.01 respectively; we also set optimal context window size for different datasets. For Temp MF, we set the latent factor vector dimensionality, learning rate, user-preference parameter, location-characteristic parameter, and the time regularization parameter to 10, 0.0001, 2, 2, and 1 respectively. |
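
The Dataset Splits and Experiment Setup rows above contain enough detail to sketch the evaluation protocol. The Python snippet below is a minimal, hypothetical illustration rather than the authors' code: the check-in column names are assumptions (not taken from the paper or the Gowalla dump), and the hyperparameter dictionary simply restates the values quoted above, with the two parameter symbols that were lost in PDF-to-text extraction kept as unnamed entries.

```python
# Illustrative sketch only: column names and file layout are assumptions, not
# taken from the paper or from the Gowalla dump at http://www.yongliu.org/datasets.
import pandas as pd

SPLIT_DATE = pd.Timestamp("2011-03-28")  # train on check-ins before this date (~80%)
END_DATE = pd.Timestamp("2011-06-01")    # collection cut-off stated in the paper


def temporal_split(checkins: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Date-based train/test split as described in the Dataset Splits row."""
    ts = pd.to_datetime(checkins["timestamp"])
    in_range = ts < END_DATE
    train = checkins[in_range & (ts < SPLIT_DATE)]
    test = checkins[in_range & (ts >= SPLIT_DATE)]
    return train, test


# Baseline hyperparameters exactly as quoted in the Experiment Setup row.
# Entries named "unspecified" correspond to parameters whose symbols are missing
# from the extracted text; only their values (10 and 1) are recoverable.
BASELINE_PARAMS = {
    "WRMF": {"dim": 150, "unspecified": 10, "reg": 0.01},
    "PTMF": {"category_level": 2, "dim": 5, "learning_rate": 1e-4, "reg": 0.01},
    "SG-CWARP": {"dim": 200, "unspecified": 1, "reg": 0.01},
    "Temp MF": {"dim": 10, "learning_rate": 1e-4, "user_preference": 2,
                "location_characteristic": 2, "time_reg": 1},
}
```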