Predicting Landslides Using Locally Aligned Convolutional Neural Networks
Authors: Ainaz Hajimoradlou, Gioachino Roberti, David Poole
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate our method, we created a standardized dataset of georeferenced images consisting of the heterogeneous features as inputs, and compared our method to several baselines, including linear regression, a neural network, and a convolutional network, using log-likelihood error and Receiver Operating Characteristic curves on the test set. Our model achieves 2-7% improvement in terms of accuracy and 2-15% boost in terms of log likelihood compared to the other proposed baselines. |
| Researcher Affiliation | Collaboration | Ainaz Hajimoradlou¹, Gioachino Roberti², and David Poole¹; ¹University of British Columbia, ²Minerva Intelligence; {ainaz, poole}@cs.ubc.ca, groberti@minervaintelligence.com |
| Pseudocode | No | The paper includes a diagram of the LACNN architecture (Figure 2) and descriptions of its components, but it does not provide any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available publicly here: https://github.com/ainazHjm/LandslidePrediction/ |
| Open Datasets | Yes | We provide a standardized dataset so that others can compare their results to ours. This dataset is compiled from public domain data from various sources... The instructions on how to access data are available for other researchers here: https://github.com/ainazHjm/VenetoItaly/ |
| Dataset Splits | Yes | We randomly partitioned image patches such that 80% of the data is used for training, 10% for testing, and the other 10% for validation (refer to Figure 3). |
| Hardware Specification | Yes | The rasters in the dataset are too large to fit into a 12 GB memory of a Titan XP GPU when training. |
| Software Dependencies | No | The paper mentions implementing the model and providing code, but it does not specify any software dependencies (e.g., Python, PyTorch, TensorFlow) with their version numbers. |
| Experiment Setup | Yes | Table 1 shows the hyper-parameters used for training each of these models. We optimized the learning rate and the optimizer with 5-fold cross-validation for one epoch. The batch size is chosen such that we can fit the maximum number of patches in the memory. The number of epochs is chosen to fully train each model. We validate our models at each epoch and reduce the learning rate if the validation error keeps increasing for patience number of epochs to avoid overfitting. We chose patience = 2 and decay = 0.001, which is the L2 regularization lambda, in our experiments. Table 1 also explicitly lists the optimizer, learning rate (LR), number of epochs, and batch size (BS) for each model. |
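
The quoted rows above pin down three reproducible pieces of the setup: the 80/10/10 patch split, the training schedule, and the evaluation metrics. The sketches below illustrate each one; any identifier, value, or helper not quoted in the table is an assumption for illustration, not code from the authors' repository.

First, a minimal sketch of the random 80/10/10 partition described in the Dataset Splits row, assuming image patches are addressed by integer index (the seed and patch count are placeholders):

```python
import numpy as np

def split_patches(num_patches, train=0.8, val=0.1, seed=0):
    """Randomly partition patch indices into train/validation/test sets.

    The 80/10/10 proportions come from the paper; the index-based
    addressing and the fixed seed are assumptions for illustration.
    """
    idx = np.random.default_rng(seed).permutation(num_patches)
    n_train = int(train * num_patches)
    n_val = int(val * num_patches)
    return (idx[:n_train],                 # 80% training
            idx[n_train:n_train + n_val],  # 10% validation
            idx[n_train + n_val:])         # 10% testing

train_idx, val_idx, test_idx = split_patches(1000)
```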
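
Second, the Experiment Setup row describes reducing the learning rate when validation error keeps increasing for patience = 2 epochs, with decay = 0.001 as the L2 regularization lambda. In PyTorch this maps naturally onto `ReduceLROnPlateau` and the optimizer's `weight_decay` argument; the toy model, data, and loss below are stand-ins, and the paper's Table 1 gives the actual per-model optimizer, LR, epochs, and batch size:

```python
import torch
from torch import nn, optim

# Stand-in binary classifier and random data; the paper's actual model is
# the LACNN and its inputs are georeferenced raster patches.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x_train, y_train = torch.randn(256, 8), torch.randint(0, 2, (256, 1)).float()
x_val, y_val = torch.randn(64, 8), torch.randint(0, 2, (64, 1)).float()

loss_fn = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3,
                       weight_decay=0.001)  # decay = 0.001 as the L2 lambda
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", patience=2)      # patience = 2 from the paper

for epoch in range(20):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    scheduler.step(val_loss)  # lowers LR after 2 epochs without improvement
```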
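
Finally, the Research Type row quotes log-likelihood error and Receiver Operating Characteristic curves as the comparison metrics. A sketch of that scoring with scikit-learn, using random placeholder predictions rather than model outputs:

```python
import numpy as np
from sklearn.metrics import accuracy_score, auc, log_loss, roc_curve

# Placeholder ground truth and predicted landslide probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(rng.random(1000), 1e-7, 1 - 1e-7)

nll = log_loss(y_true, y_prob)            # negative mean log likelihood
acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))
fpr, tpr, _ = roc_curve(y_true, y_prob)   # points on the ROC curve
print(f"log loss: {nll:.3f}  accuracy: {acc:.3f}  AUC: {auc(fpr, tpr):.3f}")
```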