Road Network Representation Learning with the Third Law of Geography
Authors: Haicang Zhou, Weiming Huang, Yile Chen, Tiantian He, Gao Cong, Yew Soon Ong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our framework on two real-world datasets across three downstream tasks. The results show that the integration of the Third Law significantly improves the performance of road segment representations in downstream tasks. Our code is available at https://github.com/Haicang/Garner. |
| Researcher Affiliation | Academia | (1) College of Computing and Data Science, Nanyang Technological University, Singapore; (2) Institute of High-Performance Computing, Agency for Science, Technology and Research, Singapore; (3) Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore |
| Pseudocode | No | The paper describes its approach using text and mathematical equations but does not include a distinct 'Pseudocode' or 'Algorithm' block or figure. |
| Open Source Code | Yes | Our code is available at https://github.com/Haicang/Garner. |
| Open Datasets | Yes | We use data from two cities, i.e. Singapore and New York City (NYC). The datasets include road networks from OpenStreetMap (OSM) [27] and street view images (SVIs) from Google Maps [7]. |
| Dataset Splits | No | The paper refers to 'train', 'validation', and 'test' in the context of evaluation metrics and general machine learning practices, but does not explicitly provide the specific percentages or counts for the train/validation/test splits used in their experiments. |
| Hardware Specification | Yes | All the experiments are executed on an Ubuntu server (Ubuntu 20.04) with 8 Nvidia Tesla V100 (32GB) GPUs, an Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (40 cores, 80 threads), and 512 GB memory. |
| Software Dependencies | Yes | All the code is implemented with Python=3.11.8, PyTorch=2.1 (CUDA=11.8) [28], DGL=2.1 [41]. |
| Experiment Setup | Yes | We use the Adam optimizer [16] with the learning rate as 0.001 and set the training iterations as 2500 with early stopping. The sampling size is set as 4000. For the settings of baselines, we follow their default settings but set the dimension of representation as 512, the same as our method. The road features and image embeddings are projected into 256 dimensions. We set k = 6 in the kNN graph for geographic-configuration-aware graph augmentation, and d = 22 for spectral negative sampling. The hyper-parameter for graph diffusion is α = 0.2, as suggested by [18]. The hidden dimension and the dimension of the representation are set as 512. |
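For reference, the reported hyper-parameters can be collected into a single configuration object. This is a hypothetical sketch for reproduction purposes; the key names are illustrative and do not come from the authors' released code at https://github.com/Haicang/Garner.

```python
# Hypothetical configuration summarizing the hyper-parameters reported in the
# paper's experiment setup (key names are our own, not the authors').
config = {
    "optimizer": "Adam",            # Adam optimizer [16]
    "learning_rate": 1e-3,
    "max_iterations": 2500,         # with early stopping
    "sampling_size": 4000,
    "representation_dim": 512,      # also applied to all baselines
    "projection_dim": 256,          # road features and image embeddings
    "knn_k": 6,                     # kNN graph for geographic-configuration-aware augmentation
    "spectral_neg_d": 22,           # d for spectral negative sampling
    "diffusion_alpha": 0.2,         # graph diffusion, as suggested by [18]
    "hidden_dim": 512,
}

print(config["representation_dim"], config["knn_k"])
```

Such a dictionary makes it easy to check that a reimplementation matches the reported setup before training.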