Capturing Uncertainty in Unsupervised GPS Trajectory Segmentation Using Bayesian Deep Learning

Authors: Christos Markos, James J. Q. Yu, Richard Yi Da Xu

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In our experiments on Microsoft's Geolife dataset (Zheng et al. 2008a,b), BTCN achieved 65.8% timestep-level accuracy without using any labels, outperforming established baselines as well as its non-Bayesian variant." From the Experiments / Simulation Setup section: "All experiments were conducted on a server with an Intel Xeon Silver 4210 CPU clocked at 2.20 GHz and NVIDIA GeForce RTX 2080Ti GPUs with 11 GB of GDDR6 memory."
Researcher Affiliation | Academia | Christos Markos (1,2), James J.Q. Yu (1), Richard Yi Da Xu (2); (1) Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; (2) Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | This study is evaluated on the Geolife dataset by Microsoft Research Asia (Zheng et al. 2008a,b), which has been widely used for GPS-based trajectory research.
Dataset Splits | Yes | All trajectories are split into chunks of length M and divided into training and test sets using a 90/10 ratio. (See the preprocessing sketch after the table.)
Hardware Specification | Yes | All experiments were conducted on a server with an Intel Xeon Silver 4210 CPU clocked at 2.20 GHz and NVIDIA GeForce RTX 2080Ti GPUs with 11 GB of GDDR6 memory.
Software Dependencies | No | The paper mentions using the "Adam optimizer with a learning rate of 10^-5 and default hyperparameters β1 = 0.9, β2 = 0.999" but does not specify any software libraries or versions (e.g., Python, PyTorch, or TensorFlow versions) used for implementation.
Experiment Setup | Yes | BTCN uses 3 consecutive pairs of temporal residual and SE blocks. Within the i-th temporal residual block, both convolutions combine a larger kernel size of 8 with exponentially increasing dilation rates d = 2^i, i ∈ {0, 1, 2}. The dropout probability p_drop is set to 0.2 for all dropout layers, as in (Ribeiro et al. 2019); it is maintained at test time as required for MC dropout sampling, from which S = 50 samples are obtained. For all SE blocks, the reduction ratio r is set to 8. BTCN is trained for 200 epochs using the Adam optimizer with a learning rate of 10^-5 and default hyperparameters β1 = 0.9, β2 = 0.999. (An illustrative code sketch of this setup follows the table.)
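
The dataset-splits row reports only that trajectories are cut into length-M chunks and split 90/10. The snippet below is a minimal sketch of that preprocessing; the paper releases no code, so the function name, the non-overlapping windowing, and the shuffled split are all assumptions.

```python
# Hypothetical sketch of the reported preprocessing: trajectories are cut
# into fixed-length chunks of M timesteps and split 90/10 into train/test.
# Windowing strategy and shuffling are assumptions, not the authors' code.
import numpy as np

def chunk_and_split(trajectories, M, test_ratio=0.1, seed=0):
    """trajectories: list of (T_i, n_features) arrays of GPS-derived features."""
    chunks = [t[j:j + M] for t in trajectories
              for j in range(0, len(t) - M + 1, M)]    # non-overlapping windows
    chunks = np.stack(chunks)                          # (n_chunks, M, n_features)
    idx = np.random.default_rng(seed).permutation(len(chunks))
    n_test = int(len(chunks) * test_ratio)             # 10% held out for testing
    return chunks[idx[n_test:]], chunks[idx[:n_test]]  # (train, test)
```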
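
To make the experiment-setup row concrete, here is a minimal sketch of the described components: three pairs of dilated temporal residual blocks (kernel size 8, dilation d = 2^i) and squeeze-and-excitation blocks (r = 8), with dropout (p = 0.2) kept active at test time so that S = 50 Monte Carlo forward passes yield mean predictions plus a variance-based uncertainty estimate. The paper names no framework and releases no code, so PyTorch, the class names, the hidden width, and the class count below are all assumptions.

```python
# Hypothetical sketch of the described BTCN building blocks. PyTorch and
# every identifier here (BTCN, TemporalResidualBlock, SEBlock,
# mc_dropout_predict) are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation over channels; reduction ratio r = 8 per the paper."""
    def __init__(self, channels, r=8):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)
        self.fc2 = nn.Linear(channels // r, channels)

    def forward(self, x):                       # x: (batch, channels, time)
        s = x.mean(dim=-1)                      # squeeze: global average over time
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        return x * s.unsqueeze(-1)              # excite: rescale channels

class TemporalResidualBlock(nn.Module):
    """Two dilated causal convolutions (kernel size 8) with dropout p = 0.2."""
    def __init__(self, in_ch, out_ch, dilation, kernel_size=8, p_drop=0.2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad to keep convs causal
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation)
        self.drop = nn.Dropout(p_drop)           # left on at test time (MC dropout)
        self.down = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        y = self.drop(F.relu(self.conv1(F.pad(x, (self.pad, 0)))))
        y = self.drop(F.relu(self.conv2(F.pad(y, (self.pad, 0)))))
        return F.relu(y + self.down(x))           # residual connection

class BTCN(nn.Module):
    """Three temporal-residual/SE pairs with dilation rates d = 2^i, i in {0, 1, 2}."""
    def __init__(self, in_ch, hidden=64, n_classes=5):  # widths/classes: assumptions
        super().__init__()
        blocks, ch = [], in_ch
        for i in range(3):
            blocks += [TemporalResidualBlock(ch, hidden, dilation=2 ** i),
                       SEBlock(hidden, r=8)]
            ch = hidden
        self.body = nn.Sequential(*blocks)
        self.head = nn.Conv1d(hidden, n_classes, 1)     # per-timestep logits

    def forward(self, x):                               # x: (batch, in_ch, time)
        return self.head(self.body(x))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Draw S = 50 stochastic forward passes with dropout left active, then
    return the mean class probabilities and their per-timestep variance as
    an uncertainty estimate."""
    model.train()                                       # keeps dropout on at test time
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)
```

Training under the reported settings would then use `torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999))` for 200 epochs, matching the optimizer configuration quoted in the table.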