DeepDPM: Dynamic Population Mapping via Deep Neural Network
Authors: Zefang Zong, Jie Feng, Kechun Liu, Hongzhi Shi, Yong Li
AAAI 2019, pp. 1294-1301 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on a real-life mobile dataset collected from Shanghai. Our results demonstrate that DeepDPM outperforms previous state-of-the-art methods and a suite of frequent data-mining approaches. |
| Researcher Affiliation | Academia | Beijing National Research Center for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing, China; {zzf15,feng-j16,lkc15,shz17}@mails.tsinghua.edu.cn, liyong07@tsinghua.edu.cn |
| Pseudocode | No | No explicit pseudocode or algorithm block is present. |
| Open Source Code | No | No statement about open-sourcing the code or a link to a repository is provided. |
| Open Datasets | No | We collect our representative real-life mobility dataset from ISP, which contains cellular network access records in 9685 different base stations in Shanghai for 4464 different time slots, from 1st July, 2017 to 31st July, 2017... Also, we collect our POI dataset from Tencent, which contains 618296 POI records in 17 categories. The paper describes the collected datasets but does not provide specific access information (link, DOI, or formal citation for public availability). |
| Dataset Splits | Yes | We use 5-fold cross-validation in the experiment. |
| Hardware Specification | Yes | Each SRCNN is trained independently on a Titan X GPU, and the inference is then executed sequentially on a single Titan X GPU. |
| Software Dependencies | No | TensorFlow is utilized to build and train DeepDPM. The paper mentions TensorFlow but does not provide a specific version number, which is required for reproducibility. |
| Experiment Setup | Yes | Except that 38×38 patches are used at the X2-Xfg level and 58×58 patches at the X1-X2 level, all SRCNNs are trained with the same set of parameters. Layer 1 consists of 64 filters with 9×9 kernels, layer 2 consists of 32 filters with 1×1 kernels, and the output layer uses a 5×5 kernel. Each network is trained using Adam optimization with a learning rate of 10^-4 for the first two layers and 10^-5 for the last layer, and MSE loss as the loss function for every training step. Each model is trained for 10^5 iterations with a batch size of 512. |
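
Since the authors do not release code, the following is a minimal TensorFlow sketch of one SRCNN stage reconstructed only from the hyperparameters reported in the Experiment Setup row (layer sizes, per-layer learning rates, MSE loss, batch size 512, 10^5 iterations). The patch shape, single-channel input, layer names, and data pipeline are assumptions, not the authors' implementation.

```python
# Hedged sketch of one SRCNN stage from the reported hyperparameters.
# Assumptions: 38x38 single-channel density patches, layer names, and the
# (commented-out) data pipeline; these are not taken from the paper's code.
import tensorflow as tf

def build_srcnn(patch_size=38):
    inputs = tf.keras.Input(shape=(patch_size, patch_size, 1))
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu", name="conv1")(inputs)
    x = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu", name="conv2")(x)
    outputs = tf.keras.layers.Conv2D(1, 5, padding="same", name="conv3")(x)
    return tf.keras.Model(inputs, outputs)

model = build_srcnn()
loss_fn = tf.keras.losses.MeanSquaredError()

# Per-layer learning rates as reported: 1e-4 for the first two layers, 1e-5 for the output layer.
opt_front = tf.keras.optimizers.Adam(learning_rate=1e-4)
opt_last = tf.keras.optimizers.Adam(learning_rate=1e-5)
front_vars = (model.get_layer("conv1").trainable_variables
              + model.get_layer("conv2").trainable_variables)
last_vars = model.get_layer("conv3").trainable_variables

@tf.function
def train_step(coarse_patches, fine_patches):
    with tf.GradientTape() as tape:
        preds = model(coarse_patches, training=True)
        loss = loss_fn(fine_patches, preds)
    grads = tape.gradient(loss, front_vars + last_vars)
    opt_front.apply_gradients(zip(grads[:len(front_vars)], front_vars))
    opt_last.apply_gradients(zip(grads[len(front_vars):], last_vars))
    return loss

# Training loop as reported: 1e5 iterations with batch size 512
# (dataset construction is an assumption and omitted here).
# for coarse, fine in dataset.batch(512).take(100_000):
#     train_step(coarse, fine)
```

The two-optimizer gradient split is one straightforward way to reproduce the reported per-layer learning rates in TensorFlow; the paper does not state how the authors implemented it.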