A Distributed Multi-Sensor Machine Learning Approach to Earthquake Early Warning
Authors: Kevin Fauvel, Daniel Balouek-Thomert, Diego Melgar, Pedro Silva, Anthony Simonet, Gabriel Antoniu, Alexandru Costan, Véronique Masson, Manish Parashar, Ivan Rodero, Alexandre Termier
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that DMSEEW is more accurate than the traditional seismometer-only approach and the combined-sensors (GPS and seismometers) approach that adopts the rule of relative strength. |
| Researcher Affiliation | Academia | Univ Rennes, Inria, CNRS, IRISA, Rennes, France; Rutgers Discovery Informatics Institute, Rutgers University, New Jersey, USA; Department of Earth Sciences, University of Oregon, Oregon, USA |
| Pseudocode | No | The paper describes the algorithm steps in text and provides a diagram (Figure 1), but it does not include a formal pseudocode block or an algorithm listing. |
| Open Source Code | Yes | In addition, we render public our real-world dataset collected and validated with geoscientists and we make public reference to the code of our machine learning algorithms used. |
| Open Datasets | Yes | We employ a real-world dataset (Fauvel et al. 2019) composed of GPS and seismometers data on normal activity/medium earthquakes/large earthquakes collected and validated with geoscientists. (https://figshare.com/articles/Earthquake_Early_Warning_Dataset/9758555) |
| Dataset Splits | Yes | We performed a stratified k-fold cross-validation which kept the same proportion of earthquakes of different categories for each fold. K is set to 3 considering the number of large earthquakes (14 earthquakes). We present the dataset split in Table 2. |
| Hardware Specification | No | The paper discusses the need for 'high-performance computing techniques and equipments' and 'well-provisioned computing systems' but does not specify any particular hardware components such as CPU or GPU models, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper mentions several software packages like WEASEL+MUSE, MLSTM-FCN, scikit-learn, xgboost, and hyperopt, often referring to their public implementations. However, it generally does not provide specific version numbers for these libraries, which is necessary for full reproducibility. |
| Experiment Setup | Yes | WEASEL+MUSE: we use the public implementation with the recommended settings (SFA word lengths l in [2,4,6], windows length in [4:60], chi=2, bias=1, p=0.1, c=5 and a solver equals to L2R LR DUAL) (Schäfer and Leser 2017); MLSTM-FCN, we test the public implementation based on the original paper (Fazle, Majumdar, and Harford 2018), using the recommended settings (128-256-128 filters, 250 training epochs, a dropout of 0.8 and a batch size of 128); ... Hyperparameters of classifiers at central level are set by hyperopt, a sequential model-based optimization using a tree of Parzen estimators search algorithm (Bergstra, Yamins, and Cox 2013). |
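The stratified 3-fold protocol reported under Dataset Splits can be reproduced with scikit-learn's `StratifiedKFold`. The sketch below is illustrative: the features are random placeholders and the class sizes (60 normal / 26 medium / 14 large, the last matching the paper's 14 large earthquakes) are assumptions, not the real dataset from the figshare link above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # placeholder feature matrix
y = np.array([0] * 60 + [1] * 26 + [2] * 14)  # 0=normal, 1=medium, 2=large (14 large, as in the paper)

# k = 3 folds, each preserving the class proportions of earthquakes
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    counts = np.bincount(y[test_idx], minlength=3)
    print(f"fold {fold}: test class counts = {counts.tolist()}")
```

Because the split is stratified, the 14 large-earthquake samples land 5/5/4 across the three test folds, so every fold sees the rare class.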
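The Experiment Setup row states that central-level hyperparameters are set by hyperopt's tree of Parzen estimators (TPE). A minimal sketch of that workflow, assuming hyperopt is installed; the objective function and search space here are hypothetical stand-ins, not the paper's actual classifier or parameter ranges.

```python
from hyperopt import fmin, tpe, hp, Trials

def objective(params):
    # Stand-in for a cross-validated classifier loss; in the paper this
    # would evaluate the central-level model on the held-out folds.
    return (params["max_depth"] - 6) ** 2 + (params["eta"] - 0.1) ** 2

# Hypothetical search space in the style of gradient-boosting parameters
space = {
    "max_depth": hp.quniform("max_depth", 2, 10, 1),
    "eta": hp.uniform("eta", 0.01, 0.5),
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials, verbose=False)
print(best)  # best hyperparameters found by TPE
```

`tpe.suggest` is hyperopt's sequential model-based search; swapping it for `hyperopt.rand.suggest` would give plain random search over the same space.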