Crowd-Assisted Disaster Scene Assessment with Human-AI Interactive Attention

Authors: Daniel (Yue) Zhang, Yifeng Huang, Yang Zhang, Dong Wang (pp. 2717-2724)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation results on real-world case studies during Nepal and Ecuador earthquake events demonstrate that iDSA can significantly outperform state-of-the-art baselines in accurately assessing the damage of disaster scenes.
Researcher Affiliation | Academia | Daniel (Yue) Zhang, Yifeng Huang, Yang Zhang, Dong Wang; Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA. Email: {yzhang40, yhuang24, yzhang42, dwang5}@nd.edu
Pseudocode | No | The paper describes its methods and components in prose and mathematical equations but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code, providing a repository link, or making the code available in supplementary materials.
Open Datasets | Yes | We use a dataset (Nguyen et al. 2017) that consists of a total of 21,384 social media images related to two disaster events: the 2016 Ecuador Earthquake (2,280 images) and the 2015 Nepal Earthquake (19,104 images).
Dataset Splits | Yes | In our experiments, the dataset is split into a training set and a test set. The training set contains all 19,104 images from the Nepal Earthquake and the test set includes all images from the Ecuador Earthquake.
Hardware Specification | Yes | All compared schemes were run on a server with an Intel Xeon E5-2637 v4 3.50GHz CPU and 4 NVIDIA GTX 1080Ti GPUs.
Software Dependencies | No | The paper mentions several tools and models used (e.g., 'YOLO V3', 'Label Me tool', 'VGG19 model'), but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or specific library versions).
Experiment Setup | Yes | We initialize our model with the pre-trained VGG19 model for all convolutional blocks, and fine-tune it using disaster-related images. For each crowd response, we assign 6 incentive levels (2 cents, 4 cents, 6 cents, 8 cents, 10 cents, and 20 cents) decided by the BCAI module. We set the Θ to be a relatively small value so it provides a rough filtering of non-important regions based on the CAMs.
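The cross-event evaluation protocol reported above (train on all 19,104 Nepal images, test on all 2,280 Ecuador images) can be sketched as a simple partition by event. This is a minimal illustration only; the record format, the `event` field names, and the `split_by_event` helper are assumptions for the sketch and do not come from the paper.

```python
def split_by_event(images):
    """Partition image records into train/test by disaster event,
    mirroring the paper's split: Nepal 2015 for training, Ecuador 2016 for testing.
    Each record is assumed (hypothetically) to carry an "event" field."""
    train = [img for img in images if img["event"] == "nepal_2015"]
    test = [img for img in images if img["event"] == "ecuador_2016"]
    return train, test

# Toy example with placeholder records (not real dataset entries):
dataset = [
    {"event": "nepal_2015", "image_id": 1},
    {"event": "nepal_2015", "image_id": 2},
    {"event": "ecuador_2016", "image_id": 3},
]
train, test = split_by_event(dataset)
print(len(train), len(test))  # 2 1
```

Splitting by event (rather than randomly) tests whether a model trained on one disaster generalizes to an unseen one, which is the harder and more realistic setting for disaster scene assessment.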