Minimizing User Involvement for Learning Human Mobility Patterns from Location Traces
Authors: Basma Alharbi, Abdulhakim Qahtan, Xiangliang Zhang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the new representation in a link prediction task and compare our results to those of baseline approaches. We evaluate the new representation on social link prediction with two data sets, one of mobile records (MDC) and the other extracted from an LBSN (GW). We compare our results against a number of baseline methods and show that our new representation has superior performance. |
| Researcher Affiliation | Academia | Basma Alharbi, Abdulhakim Qahtan, Xiangliang Zhang; Computer, Electrical and Mathematical Sciences & Engineering Division, King Abdullah University of Science & Technology (KAUST), Thuwal 23955, Saudi Arabia |
| Pseudocode | Yes | Algorithm 1: Parameter Estimation for HuMoR |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology or links to a code repository. |
| Open Datasets | Yes | We evaluate our approach on two publicly available datasets: MDC and GW. MDC is a CDR data... (Laurila et al. 2012). GW is an LBSN... (Cho, Myers, and Leskovec 2011). |
| Dataset Splits | No | The paper mentions 'We train all graphical models for 1000 iterations', but it does not provide specific train/validation/test dataset split percentages, sample counts, or a detailed splitting methodology for their data. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance specifications used for running its experiments. |
| Software Dependencies | No | The paper mentions 'MALLET (McCallum 2002)' but does not provide specific version numbers for MALLET or any other software dependencies. |
| Experiment Setup | Yes | We train all graphical models for 1000 iterations, and optimize parameters every 50 iterations, after an initial burn-in period of 200. The common parameter β for all graphical models is set to be β = 0.01. For HuMoR, we use time range as a sequence feature, where we divided the day into four equal intervals. The first n − 1 elements in λ, associated with sequence features, are drawn from a Gaussian distribution with σ² = 0.5. The n-th element, associated with the motif default value, is drawn from a Gaussian distribution with σ² = 100. Regarding the number of motifs K, we study its sensitivity after comparing the performance of different models, where K is set to 5. |
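Since no code is released, the Experiment Setup row is the only concrete description of the training configuration. The following is a minimal Python sketch of that configuration, assuming zero-mean Gaussian priors and one λ weight per time interval plus one for the motif default value; the names (`HuMoRConfig`, `sample_lambda`) are illustrative and do not come from the paper or its (unreleased) code.

```python
# Hypothetical sketch of the reported training setup for HuMoR.
# Values are taken from the "Experiment Setup" row above; everything else
# (class/function names, zero means, lambda length = intervals + 1) is assumed.
from dataclasses import dataclass
import numpy as np


@dataclass
class HuMoRConfig:
    n_iterations: int = 1000       # total Gibbs sampling iterations
    optimize_interval: int = 50    # optimize parameters every 50 iterations
    burn_in: int = 200             # initial burn-in before optimization starts
    beta: float = 0.01             # common Dirichlet prior beta for all graphical models
    n_motifs: int = 5              # K, chosen after a sensitivity study
    n_time_intervals: int = 4      # day divided into four equal intervals (sequence feature)
    sigma2_features: float = 0.5   # variance for the first n-1 (sequence-feature) weights
    sigma2_default: float = 100.0  # variance for the n-th (motif default value) weight


def sample_lambda(cfg: HuMoRConfig, rng: np.random.Generator) -> np.ndarray:
    """Draw the lambda vector: first n-1 entries from N(0, sigma2_features),
    the last entry from N(0, sigma2_default). Zero means are an assumption."""
    n = cfg.n_time_intervals + 1
    lam = rng.normal(0.0, np.sqrt(cfg.sigma2_features), size=n)
    lam[-1] = rng.normal(0.0, np.sqrt(cfg.sigma2_default))
    return lam


if __name__ == "__main__":
    cfg = HuMoRConfig()
    print(cfg)
    print("lambda:", sample_lambda(cfg, np.random.default_rng(0)))
```

This mirrors the Dirichlet-multinomial-regression style of hyperparameter handling available in MALLET, which the paper cites, but the exact parameterization used by the authors is not recoverable from the text.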