Crisis-DIAS: Towards Multimodal Damage Analysis – Deployment, Challenges and Assessment

Authors: Mansi Agarwal, Maitree Leekha, Ramit Sawhney, Rajiv Ratn Shah

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive qualitative, quantitative and theoretical analysis on a real-world multi-modal social media dataset, we show that the Crisis-DIAS framework is superior to the state-of-the-art damage assessment models in terms of bias, responsiveness, computational efficiency, and assessment performance.
Researcher Affiliation | Academia | Delhi Technological University, New Delhi, India; Netaji Subhas Institute of Technology, New Delhi, India; Indraprastha Institute of Information Technology, New Delhi, India
Pseudocode | No | No pseudocode or algorithm blocks are provided.
Open Source Code | No | The paper does not explicitly state that source code for the methodology is available or provide a link.
Open Datasets | Yes | In this work, we have used the first multimodal, labeled, publicly available damage related Twitter dataset, CrisisMMD, created by (Alam, Ofli, and Imran 2018a). The dataset was collected by crawling the blogs posted by users during seven natural disasters, including floods, wildfires, hurricanes and earthquakes. It is hierarchical, i.e., the class labels at each stage depend on the annotation of the previous stage. (A hypothetical loading sketch follows the table.)
Dataset Splits | Yes | We use Stratified 5 fold cross-validation to establish our results. (A minimal cross-validation sketch follows the table.)
Hardware Specification | Yes | All the experiments were run on a GeForce GTX 1080 Ti GPU with memory speed of 11 Gbps.
Software Dependencies | No | The paper mentions the Inception-v3 model, RCNN, LSTM, and the Adam optimizer, but does not provide version numbers for these or for other software dependencies such as Python, TensorFlow, or PyTorch.
Experiment Setup | Yes | For the RCNN, we use LSTM layer with hidden dimension 64 to capture the contextual dependencies. The final feature vector dimension (before the softmax layer) is 128 in case of text models and 1024 for image models. We train the models using early stopping with a batch size of 64. We use Adam optimizer with an initial learning rate of 0.001, and the values of β1 and β2 as 0.9 and 0.999, respectively. (A hedged training-setup sketch follows the table.)
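
The Open Datasets row notes that CrisisMMD's labels are hierarchical, with each stage's classes conditioned on the previous stage's annotation. The pandas sketch below illustrates that two-stage filtering pattern; the file name `annotations.tsv` and the columns `informative_label` and `damage_label` are hypothetical placeholders, not CrisisMMD's actual schema.

```python
import pandas as pd

# Hypothetical sketch of consuming hierarchical CrisisMMD-style annotations:
# stage-2 damage labels exist only for tweets marked informative at stage 1.
# Path and column names are placeholders, not the dataset's real schema.
df = pd.read_csv("annotations.tsv", sep="\t")

# Stage 1: keep only tweets annotated as informative.
informative = df[df["informative_label"] == "informative"]

# Stage 2: damage-severity labels are defined only for stage-1 positives.
damage = informative.dropna(subset=["damage_label"])

print(damage["damage_label"].value_counts())
```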
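The Dataset Splits row reports stratified 5-fold cross-validation. A minimal scikit-learn sketch, using random stand-in features and labels in place of the paper's actual inputs:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Stand-in data: 100 samples, 10 features, binary damage labels.
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, size=100)

# Stratified folds preserve the class ratio in every split, which
# matters for imbalanced damage-assessment labels.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}")
```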
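The Experiment Setup row lists concrete hyperparameters, but the paper names no framework, so the following PyTorch sketch is only one plausible reading. The LSTM hidden dimension (64), the 128-dimensional pre-softmax text feature vector, and the Adam settings (lr 0.001, β1 0.9, β2 0.999) come from the quoted setup; everything else (vocabulary size, embedding dimension, bidirectionality, class count) is an assumption.

```python
import torch
import torch.nn as nn

# Hedged sketch of the reported text-side setup; architecture details
# beyond the stated numbers are assumptions, not taken from the paper.
class TextRCNNSketch(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Reported: LSTM hidden dimension of 64 for contextual dependencies.
        self.lstm = nn.LSTM(embed_dim, 64, batch_first=True, bidirectional=True)
        # Reported: 128-dimensional feature vector before the softmax layer.
        self.fc = nn.Linear(2 * 64, 128)
        self.out = nn.Linear(128, num_classes)

    def forward(self, tokens):
        h, _ = self.lstm(self.embedding(tokens))
        feat = torch.relu(self.fc(h[:, -1]))  # final 128-d feature vector
        return self.out(feat)

model = TextRCNNSketch()
# Reported optimizer settings: Adam, lr=0.001, beta1=0.9, beta2=0.999.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
# Training would use batch size 64 with early stopping, per the quoted setup.
```

A bidirectional LSTM is assumed here only because concatenating the two 64-dimensional directions matches the reported 128-dimensional feature vector; the paper itself does not state this.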