Combining Satellite Imagery and Open Data to Map Road Safety

Authors: Alameen Najjar, Shun'ichi Kaneko, Yoshikazu Miyanaga

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To empirically validate the proposed approach, we trained a deep model on satellite images obtained from over 647 thousand traffic-accident reports collected over a period of four years by the New York City Police Department. The best model predicted road safety from raw satellite imagery with an accuracy of 78%. We also used the New York City model to predict a city-scale, three-level road-safety map for the city of Denver. Compared to a map made from three years' worth of data collected by the Denver City Police Department, the map predicted from raw satellite imagery has an accuracy of 73%.
Researcher Affiliation | Academia | Alameen Najjar, Shun'ichi Kaneko, Yoshikazu Miyanaga; Graduate School of Information Science and Technology, Hokkaido University, Japan; najjar@hce.ist.hokudai.ac.jp, {kaneko, miya}@ist.hokudai.ac.jp
Pseudocode | No | The paper describes its methods in prose but includes no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured steps in a code-like format.
Open Source Code | No | The paper states that one contribution is 'Making publicly available a deep model' (Contribution 3), but it provides no link or statement that the source code for the described methodology is openly available.
Open Datasets | Yes | We used data collected in two US cities (New York and Denver), and it is summarized as follows: 647,868 traffic-accident reports collected by the NYPD over the period between March 2012 and March 2016 (https://data.cityofnewyork.us/); 110,870 traffic-accident reports collected by the Denver City Police Department over the period between July 2013 and July 2016. [...] Our models were pre-trained on a generic large-scale image dataset first. Three pre-training datasets were considered: (1) ImageNet (Deng et al. 2009), (2) Places205 (Zhou et al. 2014), and (3) both ImageNet and Places205 datasets combined.
Dataset Splits | No | The paper states 'All models in this paper were trained, verified and tested on satellite images...' and 'To evaluate the learned models, we calculated the average prediction accuracy cross-validated on three random 5%/95% data splits.' However, it gives no distinct percentages or counts for a separate validation set; the existence of one is only implied by the word 'verified'.
Hardware Specification | Yes | Finally, training was conducted using the Caffe deep learning framework (Jia et al. 2014) running on a single Nvidia GeForce TITAN X GPU.
Software Dependencies | No | The paper mentions the 'Caffe deep learning framework (Jia et al. 2014)' but provides no version numbers for Caffe or any other key software library or dependency.
Experiment Setup | Yes | Individual images have a spatial resolution of 256×256 pixels each and were crawled at three different zoom levels (18, 19, and 20). Architecture: All ConvNets used in these experiments follow the AlexNet architecture (Krizhevsky, Sutskever, and Hinton 2012). ... Reported results are obtained after 60,000 training iterations.
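The three-level road-safety map referenced above can be illustrated with a short sketch. Everything here is an assumption made for illustration: the paper excerpt does not specify its binning rule, so this sketch bins per-cell accident counts into three levels using tercile thresholds.

```python
# Illustrative sketch (not the paper's code): bin per-grid-cell accident
# counts into three road-safety levels. The tercile thresholds below are
# an assumption; the paper's actual binning rule is not quoted here.

def safety_levels(counts):
    """Assign each cell a level in {0: safe, 1: moderate, 2: dangerous}."""
    ranked = sorted(counts)
    n = len(ranked)
    t1, t2 = ranked[n // 3], ranked[2 * n // 3]  # tercile cut points
    return [0 if c <= t1 else 1 if c <= t2 else 2 for c in counts]

# Hypothetical accident counts for nine grid cells.
levels = safety_levels([0, 1, 2, 5, 8, 13, 21, 34, 55])
```

A quantile-based rule like this keeps the three classes roughly balanced regardless of how skewed the raw accident counts are, which is one common reason to map counts to discrete levels before training a classifier.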
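The quoted evaluation protocol (average prediction accuracy cross-validated on three random 5%/95% splits) can be sketched as follows. Which side trains and which tests is not stated in the excerpt; the sketch assumes 95% train / 5% test, with a trivial majority-class predictor standing in for the ConvNet.

```python
import random

# Sketch of the quoted protocol: average accuracy over three random
# 5%/95% splits. The 95%-train / 5%-test assignment and the
# majority-class stand-in model are assumptions for illustration.

def average_split_accuracy(samples, labels, n_splits=3, test_frac=0.05, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducible splits
    idx = list(range(len(samples)))
    accs = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        n_test = max(1, int(len(idx) * test_frac))
        test, train = idx[:n_test], idx[n_test:]
        # Stand-in "model": predict the most frequent training label.
        majority = max(set(labels[i] for i in train),
                       key=lambda c: sum(labels[i] == c for i in train))
        accs.append(sum(labels[i] == majority for i in test) / len(test))
    return sum(accs) / len(accs)
```

Averaging over several random splits, as the paper does, reduces the variance that a single small 5% test set would otherwise introduce into the reported accuracy.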
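For context on the image-crawling step: most satellite-imagery services address 256×256 px tiles via the standard Web-Mercator ("slippy map") scheme, where a coordinate maps to tile indices at a given zoom. This is background math, not the paper's code, and the example coordinate (Times Square) is arbitrary.

```python
import math

# Background sketch (assumption, not from the paper): standard
# Web-Mercator tile indices for a lat/lon at a given zoom level.
# Zoom levels 18-20 match those the paper crawls.

def tile_indices(lat, lon, zoom):
    n = 2 ** zoom  # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# One Manhattan coordinate at the three zoom levels used in the paper.
tiles = {z: tile_indices(40.7580, -73.9855, z) for z in (18, 19, 20)}
```

Each zoom step doubles the tile grid in both axes, so going from zoom 18 to 20 quadruples the linear resolution over the same ground footprint, which is why the paper can treat the three zoom levels as three views of the same location.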